We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists.

- 4,200+ Open Access books available
- 116,000+ international authors and editors
- 125M+ downloads
- Delivered to 151 countries
- Our authors are among the top 1% most cited scientists
- 12.2% of contributors are from the top 500 universities

A selection of our books is indexed in the Book Citation Index in Web of Science™ Core Collection (BKCI).

## Interested in publishing with us? Contact book.department@intechopen.com

Numbers displayed above are based on latest data collected. For more information visit www.intechopen.com

## **Meet the editor**

Jaydip Sen has around 20 years of experience in the fields of communication network protocol design, network analysis, cryptography, and network security. He has worked with reputed organizations such as Tata Consultancy Services, India; Oil and Natural Gas Corporation Ltd., India; Oracle India Pvt. Ltd., India; and Akamai Technology Pvt. Ltd., India. Currently, he is a Professor in the Computer Science and Engineering Department of the National Institute of Science & Technology, India. His research areas include security in wired and wireless networks, secure routing protocols in wireless ad hoc and sensor networks, trust- and reputation-based systems, and data privacy in computing. He has more than 100 publications in reputed international books, journals, and refereed conference proceedings. He is a member of ACM and IEEE.

Contents

**Preface VII**

Chapter 1 **Homomorphic Encryption — Theory and Application 1**
Jaydip Sen

Chapter 2 **Optical Communication with Weak Coherent Light Fields 33**
Kim Fook Lee, Yong Meng Sua and Harith B. Ahmad

Chapter 3 **Efficient Computation for Pairing Based Cryptography: A State of the Art 51**
Nadia El Mrabet

Chapter 4 **A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations 85**
Masa-aki Fukase

Chapter 5 **Introduction to Quantum Cryptography 111**
Xiaoqing Tan



### Preface

In an age of explosive worldwide growth of electronic data storage and communications, effective protection of information has become a critical requirement. Especially when used in coordination with other tools for information security, cryptography in all of its applications, including data confidentiality, data integrity, and user authentication, is the most powerful tool for protecting information. While the importance of cryptographic techniques, i.e., encryption, in protecting sensitive and critical information and resources cannot be overemphasized, examination of technical evolution within several industries reveals an approaching precipice of scientific change. The glacially paced but inevitable convergence of quantum mechanics, nanotechnology, computer science, and applied mathematics will revolutionize modern technology. The implications of such changes will be far reaching, with one of the greatest impacts affecting information security, more specifically, that of modern cryptography.

The theoretical numerists, responsible for cryptography's algorithmic complexities, will be affected by this scientific conglomeration, although numerologists should not be concerned. The subsequent adaptation and remodeling of classical cryptography will be a fascinating, and yet undetermined, process. Of course, this would all be irrelevant if we could just standardize the use of Arithmancy and its powers of numerical divination. Then again, we live in a society where mysticism is left to those practicing the pseudoscience of applied mathematics.

In addition to the Intel 8080 and disco, the 1970s gave us public-key cryptography. Popularized by the RSA cryptosystem, it introduced a new method of encryption that overcame the key-exchange problems of symmetric cryptosystems. Because generating large prime numbers is much easier than factoring their product, public-key systems have proved to be a mathematically strong approach. The need to evaluate the security of factoring-based cryptosystems has led to advancements in integer factorization algorithms. From the simplistic use of trial division, to the more advanced techniques of Fermat or Pollard rho factorization, to the sophisticated general number field sieve, improved approaches using computer processing have left only the strongest of cryptosystems and key sizes intact.

However, the 30-year evolution of public-key cryptography has shown some remarkably difficult and complex advances in computational number theory. One of the best-known examples is well documented in the results of the RSA Factoring Challenge. Starting in 1991, with the smallest key, RSA-100 (100 decimal digits), being cracked within the first two weeks, approximately 50 RSA numbers of increasing length were published in an open factorization contest. Ending in 2007, with 12 solutions, the event resulted in numerous mathematical milestones for the crypto community, including the factorization of RSA-640 (193 decimal digits, 640 bits) in 2005. While this event is considered more successful than the "pi" eating contest held by the Association of Symmetric Security, private-key cryptography is a complementary, efficient, and secure method of encryption. When compared to public-key systems, processing time is reduced by two to three orders of magnitude, as there is no computational overhead for key correlation. Whether using block or stream ciphers, the use of a single key for both encryption and decryption relies on a high level of key integrity. In addition, key entropy and secure key transfer, or key distribution, are fundamental to its success, ensuring that only the sender and recipient have a copy, without any risk of compromise. Even the most secure scheme ever known, the one-time pad, is powerless when improperly implemented, and it becomes vulnerable to attack when a key is reused as a two- or three-time pad.
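The key-reuse failure just mentioned can be made concrete. The following sketch (not from the original text; the plaintexts and key are invented for illustration) shows that a one-time pad decrypts correctly, but that reusing the key leaks the XOR of the two plaintexts to anyone holding both ciphertexts:

```python
# A one-time pad is information-theoretically secure only if the key is
# never reused. Reusing it ("two-time pad") leaks the XOR of the two
# plaintexts, which is often enough to recover both.

import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
c1, c2 = xor(p1, key), xor(p2, key)

# Decryption is just XOR with the same key:
assert xor(c1, key) == p1

# Key reuse: the key cancels out, exposing p1 XOR p2 to an eavesdropper.
assert xor(c1, c2) == xor(p1, p2)
```

Because `c1 XOR c2` equals `p1 XOR p2`, known or guessable fragments of one message immediately reveal the corresponding fragments of the other.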


It is important to note how PCs have evolved alongside cryptography, since all of this takes place on some level of computer hardware. In a general sense, given a strong algorithm, the increase in computational power has necessitated the use of larger key sizes. For example, brute-force attacks require, on average, trying half of all possible key combinations before making a correct hit. Therefore, increasing the key size makes the job of brute-force attackers exponentially more difficult. For each additional bit, the key space is doubled, thus doubling the work required to crack it (this is an oversimplification of a complex analytic process). However, the battle of computer processing power versus mathematical complexity has been one of the fundamental challenges of maintaining cryptographic security. Examining the landscape of modern computing reveals both legitimate and questionable concerns. Using parallel processing and distributed computing, the time required to break keys can be reduced; theoretically, with n computers, the time required to crack a key is 1/n times the time required using one computer. On the other hand, with the possibility (or the realization) that Moore's Law will soon be no more, it may be that the physical constraints of conventional silicon chips will be outpaced by the conceptual constraints of mathematics.
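The doubling-per-bit and 1/n-speedup arithmetic above can be sketched in a few lines (an illustrative toy, not a benchmark; the trial rate and machine counts are invented numbers):

```python
# Illustrative sketch (not from the original text): how key length and
# parallelism trade off in an idealized brute-force search. The figures
# are the averages the preface describes, not measurements.

def expected_trials(key_bits: int) -> int:
    """A brute-force attacker tries half the key space on average."""
    return 2 ** key_bits // 2

def parallel_time(key_bits: int, trials_per_second: float, n_machines: int) -> float:
    """Idealized wall-clock seconds with n machines: 1/n of the single-machine time."""
    return expected_trials(key_bits) / (trials_per_second * n_machines)

# Each extra key bit doubles the expected work:
assert expected_trials(57) == 2 * expected_trials(56)

# With n machines, the idealized time drops by a factor of n:
t1 = parallel_time(64, 1e9, 1)
t1000 = parallel_time(64, 1e9, 1000)
assert abs(t1 / t1000 - 1000) < 1e-6
```

The exponential term always wins: adding 10 bits to the key multiplies the work by 1024, wiping out a thousand-machine cluster's advantage.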

Quantum computing is often discussed as the disruptive technology that will transform computer science. The theoretical blueprints for quantum computers were drafted by Richard Feynman over 30 years ago, and currently several companies have prototype designs, with manufacturing claims. However, the optimism of this field about its potential impact is somewhat premature. The scientific media have repeatedly voiced concerns about quantum computers annihilating the existing public-key cryptosystems. A news release from Accenture says, "A breakthrough in quantum or molecular computing could leave today's computer and IT security systems in the dust." Science Daily describes the arms race that will result from the post-apocalyptic world of post-quantum computing. Why is the technology so threatening to cryptographic security?

Perhaps it's the publicized predictions that quantum cryptanalysis will mark the downfall of classical cryptography. In 1994, while at AT&T Bell Laboratories, mathematician Peter Shor developed a quantum algorithm that can factor large numbers in polynomial time. Transitioning from classical to quantum computing, Shor's algorithm has the potential to break all of the existing public-key schemes used today. In 1996, Lov Grover created a database search algorithm that provided powerful quantum calculations through functional inversion. Grover's algorithm provides a quadratic improvement over the brute-force approach. Applications of these mathematical attacks remain proofs of concept in the absence of the necessary hardware.
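To see why Shor's algorithm threatens factoring-based schemes, it helps to know that it reduces factoring to finding the multiplicative order of a number modulo N; everything else is classical number theory. The sketch below (illustrative code, not from the original text) performs that reduction with a brute-force order-finding loop, which is exactly the step a quantum computer would perform in polynomial time:

```python
# Classical sketch of the reduction behind Shor's algorithm. The brute-force
# order() loop is the part a quantum computer accelerates; the rest is the
# ordinary post-processing that turns an order into a factor.

from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a modulo n, found by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> int:
    """Given a coprime base a whose order r is even and a^(r/2) != -1 (mod n),
    derive a nontrivial factor of n as gcd(a^(r/2) - 1, n)."""
    assert gcd(a, n) == 1
    r = order(a, n)
    assert r % 2 == 0 and pow(a, r // 2, n) != n - 1
    return gcd(pow(a, r // 2, n) - 1, n)

f = shor_factor(15, 7)
assert f in (3, 5) and 15 % f == 0
```

For n = 15 and a = 7, the order is 4 and gcd(7² − 1, 15) = 3 recovers a prime factor; the quantum speedup lies entirely in computing the order, not in this post-processing.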

Furthermore, there are several cryptosystems thought to be resilient to cryptanalytic attack from both traditional computers and quantum computers, such as: (i) Hash-based cryptography, (ii) Code-based cryptography, (iii) Lattice-based cryptography, (iv) Multivariate-quadratic-equations cryptography, and (v) Secret-key cryptography.


Encryption may not be the most glamorous layer of security, but when properly implemented, it's probably the most sophisticated and strongest layer. A majority of real-world defense strategies still lack a cryptographic layer. When faced with relevant security concerns to address, IT managers should be allocating resources to the weak areas of the security chain. For those who've never spent time reading the works of Schneier: not only is security a process, but it should be thought of as a chain whose strength is only as strong as its weakest link. Encryption is usually one of the strongest links in that chain. Fix the areas that pose the real threats: social engineering, end-user training, policy enforcement, perimeter security, firewall rules, secure coding, and so on.

How many automated exploit tools exist that allow script kiddies to launch cryptanalytic attacks? Have there been underground crypto-cons going on that have eluded the hackers' radar? Or perhaps we should fear the Quantum Hacking group?

About the book: The purpose of this book is to discuss some of the critical security challenges faced by today's computing world and the mechanisms to defend against them using classical and modern techniques of cryptography. With this goal, the book presents a collection of research work by some of the experts in the field of cryptography and network security.

The book consists of five chapters that address different contemporary security issues.

In Chapter 1, entitled "Homomorphic Encryption: Theory and Applications", Sen has discussed a very important encryption technique, homomorphic encryption, a technique that allows meaningful computations to be carried out on encrypted data to produce the output of a function so that the privacy of the inputs to the function is protected. With a formal introduction to this sophisticated encryption technique, the chapter has provided a detailed survey of various contemporary homomorphic encryption schemes, including the most powerful fully homomorphic encryption mechanisms. A list of emerging research directions in the field of homomorphic encryption has also been outlined, which has become particularly relevant with the large-scale adoption of cloud computing technologies.

In Chapter 2, "Optical Communications with Weak Coherent Light Fields", Fook et al. have investigated how two orthogonal coherent light fields can be used to establish a correlation function between two distant observers. The authors have proposed a novel optical communication system based on weak coherent light fields to demonstrate the validity of their proposition.

In Chapter 3, "Efficient Computation for Pairing Based Cryptography: A State of the Art", El Mrabet has introduced the concept of pairing-based cryptography and discussed in detail various pairings, such as the Weil pairing, Tate pairing, Eta pairing, and Ate pairing. The author has also presented mathematical analyses and optimizations of various types of pairing schemes.

In Chapter 4, "A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations", Fukase has proposed two cipher schemes for ad hoc networks. The first scheme is based on random addressing and the second one uses a data sealing algorithm. The double cipher scheme proposed by the author uses random number generators built into the microprocessors. The details of a VLSI implementation of the cipher scheme have also been presented by the author.

In Chapter 5, "Introduction to Quantum Cryptography", Tan has discussed various issues in quantum cryptography, particularly focusing attention on three aspects: quantum key distribution, quantum secret sharing, and post-quantum cryptography. In addition to discussing various fundamental concepts of quantum cryptography, the author has also thrown some light on the post-quantum era, in which the current public-key cryptography will no longer be secure.

I am confident that the book will be very useful for researchers, engineers, and graduate and doctoral students working in the field of cryptography. It will also be very useful for faculty members of graduate schools and universities. However, since it is not a basic tutorial on cryptography, it does not contain any chapter with detailed introductory information on the fundamental concepts of cryptography. The readers need to have at least some basic knowledge of theoretical cryptography before reading the chapters of this book. Some of the chapters present in-depth cryptography- and security-related theories and the latest updates in a particular research area that might be useful to advanced readers and researchers in identifying their research directions and formulating problems to solve.

I express my sincere thanks to the authors of the different chapters of the book, without whose invaluable contributions this project could not have been successfully completed. All the authors have been extremely cooperative on different occasions during the submission, review, and editing process of the book. I would like to express my special thanks to Ms. Sandra Bakic of InTech Publisher for her support, encouragement, patience, and cooperation during the entire period of publication of the book. I would be failing in my duty if I did not acknowledge the encouragement, motivation, and assistance that I received from my students at the National Institute of Science and Technology, Odisha, India. While it is impossible for me to mention each of them by name, the contributions of Swetashree Mishra, Swati Choudhury, Ramesh Kumar, Isha Bharati, Prateek Nayak, and Aiswarya Mohapatra have been invaluable. Last but not least, I would like to thank my mother Krishna Sen, my wife Nalanda Sen, and my daughter Ritabrata Sen for being the major sources of my motivation and inspiration during the entire period of the publication of this volume.

#### **Professor Jaydip Sen**

Department of Computer Science & Engineering, National Institute of Science and Technology, Odisha, India

**Chapter 1**

## **Homomorphic Encryption — Theory and Application**

Jaydip Sen

Department of Computer Science & Engineering, National Institute of Science and Technology, Odisha, India

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56687

#### **1. Introduction**

The demand for privacy of digital data and for algorithms that handle more complex structures has increased exponentially over the last decade. This goes in parallel with the growth in communication networks and their devices and their increasing capabilities. At the same time, these devices and networks are subject to a great variety of attacks involving manipulation and destruction of data and theft of sensitive information. For storing and accessing data securely, current technology provides several methods of guaranteeing privacy, such as data encryption and the use of tamper-resistant hardware. However, the critical problem arises when there is a requirement for computing (publicly) with private data or for modifying functions or algorithms in such a way that they are still executable while their privacy is ensured. This is where homomorphic cryptosystems can be used, since these systems enable computations with encrypted data.

In 1978, Rivest et al. (Rivest et al., 1978a) first investigated the design of a homomorphic encryption scheme. Unfortunately, their *privacy homomorphism* was broken a couple of years later by Brickell and Yacobi (Brickell & Yacobi, 1987). The question arose again in 1991 when Feigenbaum and Merritt (Feigenbaum & Merritt, 1991) raised an important question: *is there an encryption function (E) such that both E(x + y) and E(x.y) are easy to compute from E(x) and E(y)?* Essentially, the question asks whether any algebraically homomorphic encryption scheme can be designed. Unfortunately, there was very little progress in determining whether such efficient and secure encryption schemes exist until 2009, when Craig Gentry, in his seminal paper, theoretically demonstrated the possibility of constructing such an encryption system (Gentry, 2009). In this chapter, we will discuss various aspects of homomorphic encryption schemes – their definitions, requirements, applications, formal constructions, and the limitations of the current homomorphic encryption schemes. We will also briefly discuss some of the emerging trends in research in this field of computer science.

© 2013 Sen; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The chapter is organized as follows. In Section 2, we provide some basic and fundamental information on cryptography and various types of encryption schemes. Section 3 presents a formal discussion on homomorphic encryption schemes and discusses their various features. In Section 4, we discuss some of the most well-known and classical homomorphic encryption schemes in the literature. Section 5 provides a brief presentation on various properties and applications of homomorphic cryptosystems. Section 6 presents a discussion on fully homomorphic encryption schemes, which are the most powerful encryption schemes for providing a framework for computing over encrypted data. Finally, Section 7 concludes the chapter while outlining a number of research directions and emerging trends in this exciting field of computation, which has tremendous potential for application in real-world deployments.

#### **2. Fundamentals of cryptography**

In this Section, we will recall some important concepts on encryption schemes. For more detailed information, the reader may refer to (Menezes et al., 1997; Van Tilborg, 2011). Encryption schemes are designed to preserve confidentiality. The security of an encryption scheme must not rely on the obfuscation of its code; it should only be based on the secrecy of the key used in the encryption process. Encryption schemes are broadly of two types: *symmetric* and *asymmetric* encryption schemes. In the following, we present a very brief discussion on each of these schemes.

**Symmetric encryption schemes**: In these schemes, the sender and the receiver agree on the key they will use before establishing any secure communication session. Therefore, it is not possible for two persons who have never met before to use such schemes directly. This also implies that, in order to communicate with different persons, we must have a different key for each person. The requirement of a large number of keys makes key generation and management relatively complex operations in these schemes. However, symmetric schemes present the advantage of being very fast, and they are used in applications where speed of execution is a paramount requirement. Among the existing symmetric encryption systems, AES (Daemen & Rijmen, 2000; Daemen & Rijmen, 2002), One-Time Pad (Vernam, 1926) and Snow (Ekdahl & Johansson, 2002) are very popular.

**Asymmetric encryption schemes:** In these schemes, every participant has a pair of keys: a private key and a public key. While the private key of a person is known only to her, the public key of each participant is known to everyone in the group. Such schemes are more secure than their symmetric counterparts, and they do not need any prior agreement between the communicating parties on a common key before establishing a communication session. RSA (Rivest et al., 1978b) and ElGamal (ElGamal, 1985) are two of the most popular asymmetric encryption systems.

**Security of encryption schemes:** Security of encryption schemes was first formalized by Shannon (Shannon, 1949). In his seminal paper, Shannon introduced the notion of perfect secrecy/unconditional secrecy, which characterizes encryption schemes for which the knowledge of a ciphertext does not give any information about the corresponding plaintext and the encryption key. Shannon also proved that the One-Time Pad (Vernam, 1926) encryption scheme is perfectly secure under certain conditions. However, no other encryption scheme has been proved to be unconditionally secure. For asymmetric schemes, we can rely on their mathematical structures to estimate their security strength in a formal way. These schemes are based on some well-identified mathematical problems which are hard to solve in general, but easy to solve for the one who knows the trapdoor – i.e., the owner of the keys. However, the estimation of the security level of these schemes may not always be correct, for several reasons. First, there may be other ways to break the system than solving the mathematical problems on which these schemes are based (Ajtai & Dwork, 1997; Nguyen & Stern, 1999). Second, most of the security proofs are performed in an idealized model called the *random oracle model*, in which the involved primitives, for example, hash functions, are considered truly random. This model has allowed the study of the security level of numerous asymmetric ciphers. However, we are now able to perform proofs in a more realistic model called the *standard model* (Canetti et al., 1998; Paillier, 2007). This model eliminates some of the unrealistic assumptions in the random oracle model and makes the security analysis of cryptographic schemes more practical.

Usually, to evaluate the attack capacity of an adversary, we distinguish among several contexts (Diffie & Hellman, 1976): *ciphertext-only attacks* (where the adversary has access only to some ciphertexts), *known-plaintext attacks* (where the adversary has access to some pairs of plaintext messages and their corresponding ciphertexts), *chosen-plaintext attacks* (where the adversary can obtain the ciphertexts of plaintexts of his choice), and *chosen-ciphertext attacks* (where the adversary has access to a decryption oracle that behaves like a black-box, takes a ciphertext as its input, and outputs the corresponding plaintext). The first context is the most frequent in the real world, since it can arise whenever an adversary eavesdrops on a communication channel. The other cases may seem difficult to achieve, and may arise when the adversary is in a more powerful position; he may, for example, have stolen some plaintexts or an encryption engine. The *chosen* attacks also exist in adaptive versions, where the opponent can wait for a computation result before choosing the next input (Fontaine & Galand, 2007).

**Probabilistic encryption:** Almost all the well-known cryptosystems are *deterministic*. This means that, for a fixed encryption key, a given plaintext will always be encrypted into the same ciphertext under these systems. However, this may lead to some security problems. The RSA scheme is a good example for explaining this point. Let us consider the following points with reference to the RSA cryptosystem:

**•** A particular plaintext may be encrypted in a too structured way: with RSA, the messages 0 and 1 are always encrypted as 0 and 1, respectively.

**•** It may be easy to compute some partial information about the plaintext: with RSA, the ciphertext *c* leaks one bit of information about the plaintext *m*, namely, the so-called Jacobi symbol (Fontaine & Galand, 2007).

**•** When using a deterministic encryption scheme, it is easy to detect when the same message is sent twice while being processed with the same key.

In view of the problems stated above, we prefer encryption schemes to be probabilistic. In the case of symmetric schemes, we introduce a random vector in the encryption process (e.g., in the pseudo-random generator for stream ciphers, or in the operating mode for block ciphers), generally called the *initial vector* (IV). This vector may be public, and it may be transmitted in clear-text form. However, the IV must be changed every time we encrypt a message. In the case of asymmetric ciphers, the security analysis is more mathematical and formal, and we want the randomized schemes to remain analyzable in the same way as the deterministic schemes. Researchers have proposed some models to randomize the existing deterministic schemes, such as *optimal asymmetric encryption padding* (OAEP) for RSA (or any scheme that is based on a trapdoor one-way permutation) (Bellare & Rogaway, 1995). In the literature, researchers have also proposed some other randomized schemes (ElGamal, 1985; Goldwasser & Micali, 1982; Blum & Goldwasser, 1985).
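The first and third of these pitfalls can be made concrete with a few lines of code. The sketch below runs textbook (unpadded, deterministic) RSA with toy parameters; the primes and exponent are illustrative stand-ins only, far too small for real use:

```python
# Toy demonstration (NOT secure): why deterministic encryption leaks.
# Textbook RSA with tiny illustrative parameters; real deployments use
# randomized padding (e.g., OAEP) and moduli of 2048 bits or more.

p, q = 61, 53                 # toy primes
N, e = p * q, 17              # public modulus N = 3233 and public exponent

def enc(m):                   # deterministic textbook RSA encryption
    return pow(m, e, N)

# Pitfall 1: the messages 0 and 1 encrypt to themselves.
assert enc(0) == 0 and enc(1) == 1

# Pitfall 3: repeated messages are detectable, since equal plaintexts
# always yield equal ciphertexts under the same key.
assert enc(42) == enc(42)
```

A probabilistic scheme avoids both problems by mixing fresh randomness *r* into every encryption, so that two encryptions of the same plaintext differ with overwhelming probability.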

A simple consequence of this requirement that encryption schemes preferably be probabilistic appears in the phenomenon called *expansion*. Since, for a plaintext, we require the existence of several possible ciphertexts, the number of ciphertexts is greater than the number of possible plaintexts. This means that the ciphertexts cannot be as short as the plaintexts; they have to be strictly longer. The ratio of the length of the ciphertext to the length of the corresponding plaintext (in bits) is called the expansion. The value of this parameter is of paramount importance in determining the *security and efficiency tradeoff* of a probabilistic encryption scheme. In Paillier's scheme, an efficient probabilistic encryption mechanism has been proposed with a value of expansion less than 2 (Paillier, 1997). We will see the significance of expansion in other homomorphic encryption systems in the subsequent sections of this chapter.

#### **3. Homomorphic encryption schemes**

During the last few years, homomorphic encryption schemes have been studied extensively, since they have become more and more important in many different cryptographic protocols such as, e.g., voting protocols. In this Section, we introduce homomorphic cryptosystems in three steps: *what*, *how* and *why*, reflecting the main aspects of this interesting encryption technique. We start by defining *homomorphic cryptosystems* and *algebraically homomorphic cryptosystems*. Then we develop a method to construct algebraically homomorphic schemes given special homomorphic schemes. Finally, we describe applications of homomorphic schemes.

**Definition:** Let the message space (*M*, o) be a finite (semi-)group, and let σ be the security parameter. A *homomorphic public-key encryption scheme* (or *homomorphic cryptosystem*) on *M* is a quadruple (*K*, *E*, *D*, *A*) of probabilistic, expected polynomial time algorithms, satisfying the following functionalities:

**• Key Generation:** On input 1<sup>σ</sup>, the algorithm *K* outputs an encryption/decryption key pair (*ke*, *kd*) = *k* ∈ 𝒦, where 𝒦 denotes the key space.

**• Encryption:** On inputs 1<sup>σ</sup>, *ke*, and an element *m* ∈ *M*, the encryption algorithm *E* outputs a ciphertext *c* ∈ *C*, where *C* denotes the ciphertext space.

**• Decryption:** The decryption algorithm *D* is deterministic. On inputs 1<sup>σ</sup>, *k*, and an element *c* ∈ *C*, it outputs an element in the message space *M* so that for all *m* ∈ *M* it holds: if *c* = *E*(1<sup>σ</sup>, *ke*, *m*), then *Prob* [*D*(1<sup>σ</sup>, *k*, *c*) ≠ *m*] is negligible, i.e., it holds that *Prob* [*D*(1<sup>σ</sup>, *k*, *c*) ≠ *m*] ≤ 2<sup>−σ</sup>.

**• Homomorphic Property:** *A* is an algorithm that on inputs 1<sup>σ</sup>, *ke*, and elements *c*<sub>1</sub>, *c*<sub>2</sub> ∈ *C* outputs an element *c*<sub>3</sub> ∈ *C* so that for all *m*<sub>1</sub>, *m*<sub>2</sub> ∈ *M* it holds: if *m*<sub>3</sub> = *m*<sub>1</sub> *o* *m*<sub>2</sub>, *c*<sub>1</sub> = *E*(1<sup>σ</sup>, *ke*, *m*<sub>1</sub>), and *c*<sub>2</sub> = *E*(1<sup>σ</sup>, *ke*, *m*<sub>2</sub>), then *Prob* [*D*(*A*(1<sup>σ</sup>, *ke*, *c*<sub>1</sub>, *c*<sub>2</sub>)) ≠ *m*<sub>3</sub>] is negligible.

Informally speaking, a homomorphic cryptosystem is a cryptosystem with the additional property that there exists an efficient algorithm to compute an encryption of the sum or the product of two messages given the public key and the encryptions of the messages, but not the messages themselves.

If *M* is an additive (semi-)group, then the scheme is called *additively homomorphic* and the algorithm *A* is called *Add*. Otherwise, the scheme is called *multiplicatively homomorphic* and the algorithm *A* is called *Mult*.

With respect to the aforementioned definitions, the following points are worth noticing:

**•** For a homomorphic encryption scheme to be efficient, it is crucial to make sure that the size of the ciphertexts remains polynomially bounded in the security parameter σ during repeated computations.

**•** The security aspects, definitions, and models of homomorphic cryptosystems are the same as those for other cryptosystems.

If the encryption algorithm *E* gets as additional input a uniform random number *r* from a set ℛ, the encryption scheme is called *probabilistic*; otherwise, it is called *deterministic*. Hence, if a cryptosystem is probabilistic, several different ciphertexts belong to one message, depending on the random number *r* ∈ ℛ. But note that, as before, the decryption algorithm remains deterministic, i.e., there is just one message belonging to a given ciphertext. Furthermore, in a probabilistic, homomorphic cryptosystem, the algorithm *A* should be probabilistic too, in order to hide the input ciphertexts. For instance, this can be realized by applying a *blinding algorithm* on a (deterministic) computation of the encryption of the product and of the sum, respectively.

**Notations:** In the following, we will omit the security parameter σ and the public key in the description of the algorithms. We will write *E*<sub>*ke*</sub>(*m*) or *E*(*m*) for *E*(1<sup>σ</sup>, *ke*, *m*), and *D*<sub>*k*</sub>(*c*) or *D*(*c*) for *D*(1<sup>σ</sup>, *k*, *c*), when there is no possibility of any ambiguity. If the scheme is probabilistic, we will also write *E*<sub>*ke*</sub>(*m*) or *E*(*m*), as well as *E*<sub>*ke*</sub>(*m*, *r*) or *E*(*m*, *r*), for *E*(1<sup>σ</sup>, *ke*, *m*, *r*). Furthermore, we will write *A*(*E*(*m*), *E*(*m'*)) = *E*(*m o m'*) to denote that the algorithm *A* (either *Add* or *Mult*) is applied on two encryptions of the messages *m*, *m'* ∈ (*M*, *o*) and outputs an encryption of *m o m'*, i.e., it holds that, except with negligible probability:

$$D\left(A\left(1^{\sigma}, k_e, E_{k_e}(m), E_{k_e}(m')\right)\right) = m \text{ o } m'$$
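As a concrete instance of the quadruple (*K*, *E*, *D*, *A*), the sketch below implements a probabilistic, additively homomorphic toy scheme: an additive ("exponent") variant of ElGamal, where *Add* multiplies ciphertexts component-wise and decryption recovers small messages by a brute-force discrete logarithm. The parameter sizes and the small-message restriction are simplifying assumptions for illustration, not part of a deployable scheme:

```python
# Minimal sketch of a probabilistic, additively homomorphic cryptosystem:
# additive ElGamal with the message in the exponent. Toy parameters only.

import random

p = 1019                      # safe prime: p = 2 * 509 + 1
g = 2                         # generator of (Z/pZ)*

# Key generation K: secret key x, public key h = g^x mod p
x = random.randrange(2, p - 1)
h = pow(g, x, p)

def E(m, r=None):             # probabilistic encryption of a small integer m
    r = random.randrange(1, p - 1) if r is None else r
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def Add(c1, c2):              # the homomorphic algorithm A (here: Add)
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def D(c):                     # decryption: recover g^m, then a small discrete log
    gm = c[1] * pow(c[0], -x, p) % p
    for m in range(200):      # messages are assumed small
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message out of range")

assert D(Add(E(3), E(4))) == 7    # D(A(E(m1), E(m2))) = m1 + m2
assert E(5, 7) != E(5, 8)         # fresh randomness varies the ciphertext
```

Note how the defining property holds exactly as in the equation above: *D*(*Add*(*E*(*m*<sub>1</sub>), *E*(*m*<sub>2</sub>))) = *m*<sub>1</sub> + *m*<sub>2</sub>, while the scheme remains probabilistic because each encryption draws a fresh *r*.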

**Example:** In the following, we give an example of a deterministic multiplicatively homomorphic scheme and an example of a probabilistic, additively homomorphic scheme.

A public-key homomorphic encryption scheme on a (semi-)ring (*M*, +, .) can be defined in a similar manner. Such schemes consist of two algorithms *Add* and *Mult* for the homomorphic property, instead of one algorithm *A*, i.e., they are additively and multiplicatively homomorphic at the same time. Such schemes are called *algebraically homomorphic*.

**Definition:** An additively homomorphic encryption scheme on a (semi-)ring (*M*, +, .) is called *scalar homomorphic* if there exists a probabilistic, expected polynomial time algorithm *Mixed_Mult* that on inputs 1<sup>σ</sup>, *ke*, *s* ∈ *M*, and an element *c* ∈ *C* outputs an element *c'* ∈ *C* so that for all *m* ∈ *M* it holds that: if *m'* = *s*.*m* and *c* = *E*(1<sup>σ</sup>, *ke*, *m*), then *Prob* [*D*(*Mixed_Mult*(1<sup>σ</sup>, *ke*, *s*, *c*)) ≠ *m'*] is negligible.

Thus, in a scalar homomorphic scheme, it is possible to compute an encryption *E*(1<sup>σ</sup>, *ke*, *s*.*m*) of the product of two messages *s*, *m* ∈ *M* given the public key *ke* and an encryption *c* = *E*(1<sup>σ</sup>, *ke*, *m*) of the one message *m*, with the other message *s* given as a plaintext. It is clear that any scheme that is algebraically homomorphic is scalar homomorphic as well. We will denote this by *Mixed_Mult*(*m*, *E*(*m'*)) = *E*(*mm'*) if the following equation holds, except possibly with a negligible probability of not holding:

$$D\left(Mixed\_Mult\left(1^{\sigma}, k_e, m, E_{k_e}(m')\right)\right) = m \cdot m'$$

**Definition:** A *blinding algorithm* is a probabilistic, polynomial-time algorithm which on inputs 1<sup>σ</sup>, *ke*, and *c* ∈ *E*<sub>*ke*</sub>(*m*, *r*), where *r* ∈ ℛ is randomly chosen, outputs another encryption *c'* ∈ *E*<sub>*ke*</sub>(*m*, *r'*) of *m*, where *r'* ∈ ℛ is chosen uniformly at random.

For instance, in a probabilistic, homomorphic cryptosystem on (*M*, o), the blinding algorithm can be realized by applying the algorithm *A* on the ciphertext *c* and an encryption of the identity element in *M*.

If *M* is isomorphic to ℤ / *n*ℤ (if *M* is finite) or to ℤ (otherwise), then the algorithm *Mixed_Mult* can easily be implemented using a double-and-*Add* algorithm, combined with a blinding algorithm if the scheme is probabilistic (Cramer et al., 2000). Hence, every additively homomorphic cryptosystem on ℤ / *n*ℤ or ℤ is also *scalar homomorphic*, and the algorithm *Mixed_Mult* can be efficiently implemented (Sander & Tschudin, 1998).

**Algebraically Homomorphic Cryptosystems:** The existence of an efficient and secure algebraically homomorphic cryptosystem has been a long standing open question. In this Section, we first present some related work considering this problem. Thereafter, we describe the relationship between algebraically homomorphic schemes and homomorphic schemes on special non-abelian groups. More precisely, we prove that a homomorphic encryption scheme on the non-abelian group (*S*<sub>7</sub>, .), the symmetric group on seven elements, allows one to construct an algebraically homomorphic encryption scheme on (**F**<sub>2</sub>, +, .). An algebraically homomorphic encryption scheme on (**F**<sub>2</sub>, +, .) can also be obtained from a homomorphic encryption scheme on the special linear group (*SL*(3, 2), .) over **F**<sub>2</sub>. Furthermore, an algebraically homomorphic scheme can be obtained using coding theory.
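The double-and-*Add* implementation of *Mixed_Mult* mentioned above can be sketched in a few lines. To keep the sketch self-contained, an insecure identity "scheme" stands in for the encryption; any real additively homomorphic triple (*E*, *Add*, *D*) could be plugged in unchanged, since *mixed_mult* only ever calls *Add* on ciphertexts:

```python
# Sketch: Mixed_Mult(s, E(m)) = E(s * m) built from Add alone, via
# double-and-add over the bits of the plaintext scalar s. The identity
# "scheme" below is an INSECURE placeholder used only to exercise the
# control flow; it is trivially additively homomorphic.

def E(m): return m                  # placeholder encryption (insecure!)
def Add(c1, c2): return c1 + c2     # homomorphic addition on ciphertexts
def D(c): return c                  # placeholder decryption

def mixed_mult(s, c):
    """Compute an encryption of s * m from the plaintext scalar s and a
    ciphertext c = E(m), using only the Add algorithm."""
    acc = None                      # encryption of the running sum
    addend = c                      # encryption of m * 2^i at step i
    while s > 0:
        if s & 1:
            acc = addend if acc is None else Add(acc, addend)
        addend = Add(addend, addend)    # "double": E(m * 2^(i+1))
        s >>= 1
    return acc if acc is not None else E(0)

assert D(mixed_mult(13, E(7))) == 91    # 13 * 7, computed from Add alone
```

For a probabilistic scheme, the result would additionally be passed through the blinding algorithm described above, so that the output ciphertext does not reveal the sequence of *Add* calls that produced it.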

**The RSA Scheme:** The classical RSA scheme (Rivest et al., 1978b) is an example of a deterministic multiplicatively homomorphic cryptosystem on *M* = (ℤ / *N*ℤ, .), where *N* is the product of two large primes. As ciphertext space, we have *C* = (ℤ / *N*ℤ, .), and as key space we have 𝒦 = {(*ke*, *kd*) = ((*N*, *e*), *d*) | *N* = *pq*, *ed* ≡ 1 *mod φ*(*N*)}. The encryption of a message *m* ∈ *M* is defined as *E*<sub>*ke*</sub>(*m*) = *m*<sup>*e*</sup> *mod N*; for the decryption of a ciphertext *E*<sub>*ke*</sub>(*m*) = *c* ∈ *C*, we compute *D*<sub>*kd*</sub>(*c*) = *c*<sup>*d*</sup> *mod N* = *m mod N*. Obviously, the encryption of the product of two messages can be efficiently computed by multiplying the corresponding ciphertexts, i.e.,

$$E\_{k\_\varepsilon}(m\_1.m\_2) = \{m\_1.m\_2\}^{\varepsilon} \\ \text{mod } N = \{m\_1^{\varepsilon} \bmod n\} \{m\_2^{\varepsilon} \bmod N\} \\ = E\_{k\_\varepsilon}(m\_1).E\_{k\_\varepsilon}(m\_2)$$

where *m*1, *m*2∈*M* . Therefore, the algorithm for *Mult* can be easiliy realized as follows:

$$\operatorname{Mult}\left(E\_{k\_\varepsilon}(m\_1)\_\prime \mid E\_{k\_\varepsilon}(m\_2)\right) = E\_{k\_\varepsilon}(m\_1)\_\prime \operatorname{E}\_{k\_\varepsilon}(m\_2)\_\prime$$

Usually in the RSA scheme as well as in most of the cryptosystems which are based on the difficulty of factoring the security parameter σ is the bit length of *N*. For instance, σ = 1024 is a common security parameter.

**The Goldwasser-Micali Scheme:** The Goldwasser-Micali scheme (Goldwasser & Micali, 1984) is an example of a probabilistic, additively homomorphic cryptosystem on *M* =(ℤ / 2ℤ, + ) with the ciphtertext space *C* =*Z* =(ℤ / *N* ℤ)\* where *N* = *pq* is the product of two large primes. We have.

$$\mathcal{K} = \left\{ \begin{pmatrix} k\_{e'} & k\_d \end{pmatrix} = \left( \begin{pmatrix} N \ \vdots \ a \end{pmatrix}, \begin{pmatrix} p \ \vdots \ q \end{pmatrix} \right) \right\} \text{ N} = pq, \quad a \in \left\{ \mathbb{Z} / N \mathbb{Z} \mathbb{Z} \right\}^\* : \left( \frac{a}{p} \right) = \left( \frac{a}{q} \right) = \dots \text{ 1} \right\}$$

Since this scheme is probabilistic, the encryption algorithm gets as additional input a random value *r* ∈. We define *Eke* (*m*, *<sup>r</sup>*)=*<sup>a</sup> mr* <sup>2</sup> *mod <sup>N</sup>* and *D*(*ke kd* ) =0 if *<sup>c</sup>* is a square and = 1 otherwise. The following relation therefore holds good:

$$E\_{k\_\varepsilon}(m\_{1\prime}\ m\_1).E\_{k\_\varepsilon}(m\_{2\prime}\ m\_2) = E\_{k\_\varepsilon}(m\_1 + m\_{2\prime}\ m\_1 r\_2).$$

The algorithms *Add* can, therefore, be efficiently implemented as follows:

$$\text{Add}\begin{pmatrix} E\_{k\_r} \begin{pmatrix} m\_{1'} & r\_1 \end{pmatrix} & E\_{k\_r} \begin{pmatrix} m\_{2'} & r\_2 \end{pmatrix} & r\_3 \end{pmatrix} = E\_{k\_r} \begin{pmatrix} m\_{1'} & r\_1 \end{pmatrix} . \ E\_{k\_r} \begin{pmatrix} m\_{2'} & r\_2 \end{pmatrix} . \ r\_3^2 \text{ mod } \mathcal{N} = E\_{k\_r} \begin{pmatrix} m\_1 + m\_2 & r\_1 r\_2 r\_3 \end{pmatrix}$$

In the above equation, *r*<sup>3</sup> <sup>2</sup> *mod <sup>N</sup>* is equivalent to *Eke* (0, *r*3). Also, *m*1, *m*2∈*M* and *r*1, *r*2,*r*<sup>3</sup> ∈ *Z*. Note that this algorithm should be probabilistic, since it obtains a random number *r*3 as an additional input.

A public-key homomorphic encryption scheme on a (semi-)ring (*M*, +,.) can be defined in a similar manner. Such schemes consist of two algorithms: *Add* and *Mult* for the homomorphic property instead of one algorithm for *A*, i.e., it is additively and multiplicatively homomorphic at the same time. Such schemes are called *algebraically homomorphic*.

**Example:** In the following, we give an example of a deterministic multiplicatively homomor‐
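The two homomorphic properties just described can be checked directly in a few lines of code. The sketch below uses deliberately tiny, insecure toy parameters; the concrete primes, the exponent, and the search for the non-residue *a* are illustrative choices, not part of either scheme's specification.

```python
import math
import random

# --- RSA: multiplicative homomorphism (toy, insecure parameters) ---
P_RSA, Q_RSA = 61, 53
N_RSA = P_RSA * Q_RSA              # 3233
PHI = (P_RSA - 1) * (Q_RSA - 1)    # 3120
E_EXP = 17
D_EXP = pow(E_EXP, -1, PHI)        # modular inverse (Python 3.8+)

def rsa_enc(m):
    return pow(m, E_EXP, N_RSA)

def rsa_dec(c):
    return pow(c, D_EXP, N_RSA)

m1, m2 = 7, 11
# Mult(E(m1), E(m2)) = E(m1) * E(m2) mod N decrypts to m1 * m2 mod N
assert rsa_dec(rsa_enc(m1) * rsa_enc(m2) % N_RSA) == (m1 * m2) % N_RSA

# --- Goldwasser-Micali: additive (XOR) homomorphism on single bits ---
P_GM, Q_GM = 499, 547
N_GM = P_GM * Q_GM

def legendre(x, pr):
    # Euler's criterion: 1 for a quadratic residue mod pr, pr - 1 otherwise
    return pow(x, (pr - 1) // 2, pr)

# public element a with Legendre symbol -1 modulo both primes
A = next(x for x in range(2, N_GM)
         if legendre(x, P_GM) == P_GM - 1 and legendre(x, Q_GM) == Q_GM - 1)

def gm_enc(bit):
    r = random.randrange(1, N_GM)
    while math.gcd(r, N_GM) != 1:
        r = random.randrange(1, N_GM)
    return pow(A, bit, N_GM) * pow(r, 2, N_GM) % N_GM

def gm_dec(c):
    # c is a square mod N iff it is a square mod p (and mod q)
    return 0 if legendre(c % P_GM, P_GM) == 1 else 1

for b1 in (0, 1):
    for b2 in (0, 1):
        # Add(E(b1), E(b2)) = E(b1) * E(b2) mod N decrypts to b1 XOR b2
        assert gm_dec(gm_enc(b1) * gm_enc(b2) % N_GM) == b1 ^ b2
```

Note how the multiplication of GM ciphertexts realizes addition of the plaintext bits: the product is *a*<sup>*b*1+*b*2</sup>*r*<sup>2</sup> mod *N*, which is a square exactly when *b*1 + *b*2 is even.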


6 Theory and Practice of Cryptography and Network Security Protocols and Technologies


**Definition:** An additively homomorphic encryption scheme on a (semi-)ring (*M*, +, ·) is called *scalar homomorphic* if there exists a probabilistic, expected polynomial-time algorithm *Mixed_Mult* that on inputs 1<sup>*σ*</sup>, *ke*, *s* ∈ *M* and an element *c* ∈ *C* outputs an element *c'* ∈ *C* such that for all *m* ∈ *M* the following holds: if *m'* = *s*·*m* and *c* = *E*(1<sup>*σ*</sup>, *ke*, *m*), then the probability *Prob*[*D*(*Mixed_Mult*(1<sup>*σ*</sup>, *ke*, *s*, *c*)) ≠ *m'*] is negligible.

Thus, in a scalar homomorphic scheme it is possible to compute an encryption *E*(1<sup>*σ*</sup>, *ke*, *s*·*m*) = *E*(1<sup>*σ*</sup>, *ke*, *m'*) of the product of two messages *s*, *m* ∈ *M*, given the public key *ke*, an encryption *c* = *E*(1<sup>*σ*</sup>, *ke*, *m*) of the one message *m*, and the other message *s* as a plaintext. It is clear that any scheme that is algebraically homomorphic is scalar homomorphic as well.

We will write *Mixed_Mult*(*m*, *E*(*m'*)) = *E*(*m*·*m'*) if the following equation holds, except possibly with a negligible probability of failure:

$$D\left(Mixed\_Mult\left(1^{\sigma},\; k_e,\; m,\; E_{k_e}(m')\right)\right) = m \cdot m'$$

**Definition:** A *blinding algorithm* is a probabilistic, polynomial-time algorithm which on inputs 1<sup>*σ*</sup>, *ke*, and *c* ∈ *Eke*(*m*, *r*), where *r* ∈ *Z* is randomly chosen, outputs another encryption *c'* ∈ *Eke*(*m*, *r'*) of *m*, where *r'* ∈ *Z* is chosen uniformly at random.

For instance, in a probabilistic, homomorphic cryptosystem on (*M*, ∘), the blinding algorithm can be realized by applying the algorithm *A* to the ciphertext *c* and an encryption of the identity element in *M*.

If *M* is isomorphic to ℤ / *n*ℤ (if *M* is finite) or to ℤ (otherwise), then the algorithm *Mixed_Mult* can easily be implemented using a double-and-*Add* algorithm, combined with a blinding algorithm if the scheme is probabilistic (Cramer et al., 2000). Hence, every additively homomorphic cryptosystem on ℤ / *n*ℤ or ℤ is also *scalar homomorphic*, and the algorithm *Mixed_Mult* can be efficiently implemented (Sander & Tschudin, 1998).
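The double-and-*Add* construction of *Mixed_Mult* can be sketched concretely. The code below instantiates it on a toy Paillier-style additively homomorphic scheme (Paillier's scheme is described later in this chapter); the parameters are illustrative and insecure, and the final blinding step re-randomizes the result as discussed above.

```python
import math
import random

# Toy Paillier-style additively homomorphic scheme (insecure parameters).
P, Q = 61, 53
N = P * Q
N2 = N * N
G = N + 1
LAM = math.lcm(P - 1, Q - 1)       # Carmichael lambda of N

def L(u):
    return (u - 1) // N

MU = pow(L(pow(G, LAM, N2)), -1, N)

def enc(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(G, m, N2) * pow(r, N, N2) % N2

def dec(c):
    return L(pow(c, LAM, N2)) * MU % N

def add(c1, c2):
    # homomorphic addition: E(m1) * E(m2) mod N^2 = E(m1 + m2)
    return c1 * c2 % N2

def blind(c):
    # re-randomize a ciphertext by folding in a fresh encryption of 0
    return add(c, enc(0))

def mixed_mult(s, c):
    # double-and-Add: compute E(s * m) from the plaintext scalar s and
    # the ciphertext c = E(m), using only the Add algorithm
    result = enc(0)
    addend = c
    while s > 0:
        if s & 1:
            result = add(result, addend)
        addend = add(addend, addend)   # doubling step
        s >>= 1
    return blind(result)

assert dec(mixed_mult(5, enc(7))) == 35
```

Only `add` and `blind` touch the underlying scheme, so `mixed_mult` works unchanged for any additively homomorphic cryptosystem on ℤ / *n*ℤ.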

**Algebraically Homomorphic Cryptosystems:** The existence of an efficient and secure algebraically homomorphic cryptosystem has been a long-standing open question. In this Section, we first present some related work on this problem. Thereafter, we describe the relationship between algebraically homomorphic schemes and homomorphic schemes on special non-abelian groups. More precisely, we prove that a homomorphic encryption scheme on the non-abelian group (**S7**, ·), the symmetric group on seven elements, allows one to construct an algebraically homomorphic encryption scheme on (**F2**, +, ·). An algebraically homomorphic encryption scheme on (**F2**, +, ·) can also be obtained from a homomorphic encryption scheme on the special linear group (*SL*(3, 2), ·) over **F2**. Furthermore, using coding theory, an algebraically homomorphic encryption scheme on an arbitrary finite ring or field could be obtained given a homomorphic encryption scheme on one of these non-abelian groups. These observations could be a first step towards settling the question whether efficient and secure algebraically homomorphic schemes exist. The research community in cryptography has spent substantial effort on this problem. In 1996, Boneh and Lipton proved that, under a reasonable assumption, every deterministic, algebraically homomorphic cryptosystem can be broken in sub-exponential time (Boneh & Lipton, 1996). This may be perceived as a negative result concerning the existence of an algebraically homomorphic encryption scheme, although most existing cryptosystems, e.g., the RSA scheme or the ElGamal scheme, can also be broken in sub-exponential time. Furthermore, if we seek algebraically homomorphic public-key schemes on small fields or rings such as *M* = **F2**, such a scheme obviously has to be probabilistic in order to be secure.


Some researchers also tried to find candidates for algebraically homomorphic schemes. In 1993, Fellows and Koblitz presented an algebraic public-key cryptosystem called Polly Cracker (Fellows & Koblitz, 1993). It is algebraically homomorphic and provably secure. Unfortunately, the scheme has a number of difficulties and is not efficient with respect to the ciphertext length. Firstly, Polly Cracker is a polynomial-based system. Therefore, computing an encryption of the product *E*(*m*1·*m*2) of two messages *m*1 and *m*2 by multiplying the corresponding ciphertext polynomials *E*(*m*1) and *E*(*m*2) leads to an exponential blowup in the number of monomials. Hence, during repeated computations, there is an exponential blowup in the ciphertext length. Secondly, all existing instantiations of Polly Cracker suffer from further drawbacks (Koblitz, 1998). They are either insecure since they succumb to certain attacks, too inefficient to be practical, or they lose the algebraically homomorphic property. Hence, it is far from clear how such schemes could be turned into efficient and secure algebraically homomorphic encryption schemes. A detailed analysis and description of these schemes can be found in (Ly, 2002).

In 2002, J. Domingo-Ferrer developed a probabilistic, algebraically homomorphic secret-key cryptosystem (Domingo-Ferrer, 2002). However, this scheme is not efficient, since there is an exponential blowup in the ciphertext length during repeated multiplications. Moreover, it was broken by Wagner and Bao (Bao, 2003; Wagner, 2003).

Thus, considering homomorphic encryption schemes on groups instead of rings seems a more promising route to a possible algebraically homomorphic encryption scheme, as it brings us closer to structures that have been used successfully in cryptography. The following theorem shows that the search for algebraically homomorphic schemes can indeed be reduced to the search for homomorphic schemes on special non-abelian groups (Rappe, 2004).

**Theorem 1:** The following two statements are equivalent: (1) there exists an algebraically homomorphic encryption scheme on (**F2**, +, ·); (2) there exists a homomorphic encryption scheme on the symmetric group (**S7**, ·).

**Proof: 1 → 2:** This direction follows immediately, and it holds for an arbitrary finite group, since the operations of finite groups can always be implemented by Boolean circuits. Let *S7* be represented as a subset of {0, 1}<sup>*l*</sup>, where e.g. *l* = 21 can be chosen, and let *C* be a circuit with addition and multiplication gates that takes as inputs the binary representations of elements *m*1, *m*2 ∈ *S*7 and outputs the binary representation of *m*1*m*2. If we have an algebraically homomorphic encryption scheme (*K*, *E*, *D*, *Add*, *Mult*) on (**F2**, +, ·), then we can define a homomorphic encryption scheme (*K̃*, *Ẽ*, *D̃*, *M̃ult*) on **S7** by defining *Ẽ*(*m*) = (*E*(*s*0), …, *E*(*s*<sub>*l*-1</sub>)), where (*s*0, …, *s*<sub>*l*-1</sub>) denotes the binary representation of *m*. *M̃ult* is constructed by substituting the addition gates in *C* by *Add* and the multiplication gates by *Mult*. *K̃* and *D̃* are defined in the obvious way.
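The gate-substitution idea can be illustrated in code. Since no efficient algebraically homomorphic scheme on **F2** is known, the sketch below uses a placeholder "scheme" whose ciphertexts are simply the plaintext bits, and a 1-bit carry circuit as a small stand-in for the group-multiplication circuit *C*; only the gate-by-gate substitution mechanism is the point here.

```python
# Placeholder "algebraically homomorphic scheme" on F2: ciphertexts are
# the plaintext bits themselves. It is NOT encryption; it only lets us
# exercise the gate-substitution construction from the proof.
def E(bit):
    return bit

def D(ct):
    return ct

def Add(c1, c2):   # homomorphic addition in F2, i.e. XOR
    return (c1 + c2) % 2

def Mult(c1, c2):  # homomorphic multiplication in F2, i.e. AND
    return (c1 * c2) % 2

# A circuit is a list of gates; each gate names its operation and the
# indices of its two inputs in the growing list of wires.
def evaluate(circuit, enc_inputs):
    wires = list(enc_inputs)
    for op, i, j in circuit:
        wires.append(Add(wires[i], wires[j]) if op == "add"
                     else Mult(wires[i], wires[j]))
    return wires[-1]

# Small stand-in for the group-multiplication circuit C: the carry bit
# of a 1-bit full adder, carry = a*b + c*(a + b) over F2 (= majority).
# Wires 0, 1, 2 are the inputs a, b, c; gates append wires 3, 4, 5, 6.
carry_circuit = [("mult", 0, 1),   # wire 3 = a*b
                 ("add", 0, 1),    # wire 4 = a + b
                 ("mult", 2, 4),   # wire 5 = c*(a + b)
                 ("add", 3, 5)]    # wire 6 = carry bit

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            ct = evaluate(carry_circuit, [E(a), E(b), E(c)])
            assert D(ct) == (a + b + c) // 2
```

Replacing the placeholder `E`, `D`, `Add`, `Mult` with a genuine algebraically homomorphic scheme on **F2** would yield exactly the construction of *M̃ult* described in the proof.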


**2 → 1:** The proof has two steps. First, we use a construction of Ben-Or and Cleve (Ben-Or & Cleve, 1992) to show that the field (**F2, +**,.) can be encoded in the special linear group (**SL**(3,2),.) over **F2**. Then, we apply a theorem from projective geometry to show that (**SL**(3,2),.) is a subgroup of **S7**. This proves the claim.
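The first step of this direction can be made concrete: bits of **F2** can be carried in the upper-triangular entries of 3×3 matrices over **F2**, where matrix multiplication realizes addition and a commutator realizes multiplication. The sketch below is a simplified Ben-Or-and-Cleve-style encoding; the particular matrices are our illustrative choice, not taken verbatim from the cited construction.

```python
# Bits of F2 carried in 3x3 unipotent matrices over F2 (elements of
# SL(3, 2)). mmul: 3x3 matrix product modulo 2, using plain lists.
def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) % 2
             for j in range(3)] for i in range(3)]

def mul(*ms):
    out = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity
    for m in ms:
        out = mmul(out, m)
    return out

def U12(b):  # I + b*E12
    return [[1, b, 0], [0, 1, 0], [0, 0, 1]]

def U23(b):  # I + b*E23
    return [[1, 0, 0], [0, 1, b], [0, 0, 1]]

def U13(b):  # I + b*E13 -- the (0, 2) entry carries the bit
    return [[1, 0, b], [0, 1, 0], [0, 0, 1]]

# Addition of F2: U13(b1) * U13(b2) = U13(b1 + b2 mod 2)
for b1 in (0, 1):
    for b2 in (0, 1):
        assert mmul(U13(b1), U13(b2)) == U13((b1 + b2) % 2)

# Multiplication of F2: over F2 each U is its own inverse, so the
# commutator A B A^-1 B^-1 is simply U12(b1) U23(b2) U12(b1) U23(b2),
# and it equals U13(b1 * b2).
for b1 in (0, 1):
    for b2 in (0, 1):
        assert mul(U12(b1), U23(b2), U12(b1), U23(b2)) == U13(b1 * b2)
```

Thus both ring operations of **F2** are expressed purely through group multiplication in *SL*(3, 2), which is the algebraic heart of the 2 → 1 direction.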

Homomorphic encryption schemes on groups have been extensively studied. For instance, we have homomorphic schemes on groups (ℤ / *M*ℤ, +) for *M* being a smooth number (Goldwasser & Micali, 1984; Benaloh, 1994; Naccache & Stern, 1998), for *M* = *p*·*q* being an RSA modulus (Paillier, 1999; Galbraith, 2002), and for groups ((ℤ / *N*ℤ)\*, ·) where *N* is an RSA modulus. All known efficient and secure schemes are homomorphic on abelian groups. However, *S7* and *SL*(3, 2) are non-abelian. Sander, Young and Yung (Sander et al., 1999) investigated the possible existence of a homomorphic encryption scheme on non-abelian groups. Although non-abelian groups have been used to construct encryption schemes (Ko et al., 2000; Paeng et al., 2001; Wagner & Magyarik, 1985; Grigoriev & Ponomarenko, 2006), the resulting schemes are not homomorphic in the sense that we need for computing efficiently on encrypted data.

Grigoriev and Ponomarenko propose a novel definition of homomorphic cryptosystems, on which they base a method to construct homomorphic cryptosystems over arbitrary finite groups, including non-abelian groups (Grigoriev & Ponomarenko, 2006). Their construction method is based on the fact that every finite group is an epimorphic image of a free product of finite cyclic groups. It uses existing homomorphic encryption schemes on finite cyclic groups as building blocks to obtain homomorphic encryption schemes on arbitrary finite groups. Since the ciphertext space obtained from the encryption scheme is a free product of groups, an exponential blowup of the ciphertext lengths occurs during repeated computations. The reason is that the length of the product of two elements *x* and *y* of a free product is, in general, the sum of the lengths of *x* and *y*. Hence, the technique proposed by Grigoriev and Ponomarenko suffers from the same drawback as the earlier schemes and does not provide an efficient cryptosystem. We note that using this construction it is possible to construct a homomorphic encryption scheme on the symmetric group **S7** and on the special linear group *SL*(3, 2). If we combine this with **Theorem 1**, we can construct an algebraically homomorphic cryptosystem on the finite field (**F2**, +, ·). Unfortunately, the exponential blowup owing to the construction method in the homomorphic encryption schemes on **S7** and on *SL*(3, 2), respectively, would lead to an exponential blowup in **F2**, and hence leaves open the question whether an efficient algebraically homomorphic cryptosystem on **F2** exists. We will come back to this issue in Section 6, where we discuss *fully homomorphic encryption schemes*.

Grigoriev and Ponomarenko propose another method to encrypt arbitrary finite groups homomorphically (Grigoriev & Ponomarenko, 2004). This method is based on the difficulty of the membership problem for groups of integer matrices, whereas in (Grigoriev & Ponomarenko, 2006) it is based on the difficulty of factoring. However, as before, this scheme is not efficient. Moreover, in (Grigoriev & Ponomarenko, 2004), an algebraically homomorphic cryptosystem over finite commutative rings is proposed. However, owing to its immense size, it is infeasible to implement in real-world applications.

#### **4. Some classical homomorphic encryption systems**

In this Section, we describe some classical homomorphic encryption systems which have created substantial interest among researchers in the domain of cryptography. We start with the first probabilistic system, proposed by Goldwasser and Micali in 1982 (Goldwasser & Micali, 1982; Goldwasser & Micali, 1984), and then discuss the famous Paillier encryption scheme (Paillier, 1999) and its improvements. Paillier's scheme and its variants are well known for their efficiency and the high level of security that they provide for homomorphic encryption. We do not discuss their mathematical considerations in detail, but summarize their important parameters and properties.

**Goldwasser-Micali scheme:** This scheme (Goldwasser & Micali, 1982; Goldwasser & Micali, 1984) is historically very important, since many subsequent proposals on homomorphic encryption were largely motivated by its approach. As in RSA, in this scheme we use computations modulo *n* = *p*·*q*, a product of two large primes. The encryption process is simple, using one product and one square, whereas decryption is heavier and involves an exponentiation. The complexity of the decryption process is *O*(*k*·*l*(*p*)<sup>2</sup>), where *l*(*p*) denotes the number of bits in *p*. Unfortunately, this scheme has a limitation, since its input consists of a single bit. First, this implies that encrypting *k* bits leads to a cost of *O*(*k*·*l*(*p*)<sup>2</sup>), which is not very efficient even if it may be considered practical. The second concern is related to the issue of *expansion*: a single bit of plaintext is encrypted as *an integer modulo n*, that is, *l*(*n*) bits. This leads to a huge blowup of the ciphertext, causing a serious problem with this scheme.

The Goldwasser-Micali (GM) scheme can be viewed from another perspective. Seen from this angle, the basic principle of the scheme is to partition a well-chosen subset of the integers modulo *n* into two secret parts, *M*0 and *M*1. The encryption process selects a random element of *Mb* to encrypt the plaintext bit *b*, and the decryption process lets the user know in which part the randomly selected element lies. The essence of the scheme lies in the mechanism used to determine the subset and to partition it into *M*0 and *M*1. The scheme uses group theory to achieve this goal. The subset is the group *G* of invertible integers modulo *n* with a Jacobi symbol, with respect to *n*, equal to 1. The partition is generated by another group *H* ⊂ *G*, consisting of the elements that are invertible modulo *n* with a Jacobi symbol, with respect to a fixed factor *p* of *n*, equal to 1. With these settings of parameters, it is possible to split *G* into two parts, *H* and *G* \ *H*. The generalization schemes of GM deal with these two groups. These schemes attempt to find two groups *G* and *H* such that *G* can be split into more than *k* = 2 parts.

**Benaloh's scheme:** Benaloh's scheme (Benaloh, 1988) is a generalization of the GM scheme that makes it possible to manage inputs of *l*(*k*) bits, *k* being a prime satisfying some specified constraints. Encryption is similar to that in the GM scheme: encrypting a message *m* ∈ {0, …, *k* − 1} amounts to picking an integer *r* ∈ *Z*<sub>*n*</sub><sup>\*</sup> and computing *c* = *g*<sup>*m*</sup>*r*<sup>*k*</sup> mod *n*. However, the decryption phase is more complex. If the input and output sizes are *l*(*k*) and *l*(*n*) bits respectively, the expansion is equal to *l*(*n*) / *l*(*k*). The value of expansion obtained in this approach is less than that achieved in GM, which makes the scheme more attractive. Moreover, the encryption is not too expensive either. The overhead in the decryption process is estimated to be *O*(√*k*·*l*(*k*)) for a pre-computation that remains constant for each dynamic decryption step. This implies that the value of *k* has to be taken very small, which in turn limits the gain obtained on the value of expansion.

**Naccache-Stern scheme:** This scheme (Naccache & Stern, 1998) is an improvement of Benaloh's scheme. Using a value of the parameter *k* that is greater than that used in Benaloh's scheme, it achieves a smaller expansion and thereby attains superior efficiency. The encryption step is precisely the same as in Benaloh's scheme; the decryption, however, is different. The value of expansion is the same as in Benaloh's scheme, i.e., *l*(*n*) / *l*(*k*), but the cost of decryption is lower and is given by *O*(*l*(*n*)<sup>5</sup> log(*l*(*n*))). The authors claim that it is possible to choose the values of the parameters in the system in such a way that the achieved value of expansion is 4 (Naccache & Stern, 1998).

**Okamoto-Uchiyama scheme:** To improve the performance of the earlier schemes on homomorphic encryption, Okamoto and Uchiyama changed the base group *G* (Okamoto & Uchiyama, 1998). By taking *n* = *p*<sup>2</sup>*q*, *p* and *q* being two large prime numbers as usual, and the group *G* = *Z*<sub>*p*<sup>2</sup></sub><sup>\*</sup>, the authors achieve *k* = *p*. The value of the expansion obtained in the scheme is 3. One of the biggest advantages of this scheme is that its security is equivalent to the factorization of *n*. However, a chosen-ciphertext attack has been proposed on this scheme that can recover the factorization of *n*. Hence, it currently has a limited applicability. Nevertheless, this scheme was used to design the EPOC systems (Okamoto et al., 2000), which are accepted in the *IEEE standard specifications for public-key cryptography* (IEEE P1363).

**Paillier scheme:** One of the most well-known homomorphic encryption schemes is due to Paillier (Paillier, 1999). It is an improvement over the earlier schemes in the sense that it is able to decrease the value of expansion from 3 to 2. The scheme uses *n* = *p*·*q* with gcd(*n*, *ϕ*(*n*)) = 1; as usual, *p* and *q* are two large primes. However, it considers the group *G* = *Z*<sub>*n*<sup>2</sup></sub><sup>\*</sup>, and a proper choice of *H* led to *k* = *l*(*n*). While the cost of encryption is not too high, decryption needs one exponentiation modulo *n*<sup>2</sup> to the power *λ*(*n*), and a multiplication modulo *n*. This makes decryption a somewhat heavyweight process. The author has shown how to manage decryption efficiently using the famous *Chinese Remainder Theorem*. With smaller expansion and lower cost compared with the other schemes, this scheme found great accept‐

issue in Section 6, where we discuss *fully homomorphic encryption schemes*.

10 Theory and Practice of Cryptography and Network Security Protocols and Technologies

**4. Some classical homomorphic encryption systems**

blow up of ciphertext causing a serious problem with this scheme.

to implement in real-world applications.

important parameters and properties.
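As a concrete illustration, the Goldwasser-Micali scheme described above can be sketched as follows. This is a toy implementation: the primes, the value of *x*, and the function names are our own illustrative choices, and the parameters are far too small to be secure.

```python
import math
import random

def legendre(a, p):
    """Legendre symbol via Euler's criterion: 1 if a is a square mod p."""
    return pow(a, (p - 1) // 2, p)

def gm_keygen(p, q):
    n = p * q
    # x must be a quadratic non-residue mod p and mod q, so that its
    # Jacobi symbol with respect to n is (-1)(-1) = 1.
    x = 2
    while not (legendre(x, p) == p - 1 and legendre(x, q) == q - 1):
        x += 1
    return (n, x)                    # public key; p serves as the private key

def gm_encrypt(pub, bit):
    n, x = pub
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    # A bit 0 is encrypted as a random square, a bit 1 as a non-square.
    return (pow(x, bit, n) * pow(r, 2, n)) % n

def gm_decrypt(c, p):
    # c is a square modulo p  <=>  the plaintext bit was 0.
    return 0 if legendre(c % p, p) == 1 else 1

p, q = 43, 53                        # toy primes; real keys need hundreds of digits
pub = gm_keygen(p, q)
c0, c1 = gm_encrypt(pub, 0), gm_encrypt(pub, 1)
assert gm_decrypt(c0, p) == 0 and gm_decrypt(c1, p) == 1
# Homomorphic property: the product of ciphertexts decrypts to the XOR.
assert gm_decrypt((c0 * c1) % pub[0], p) == 1
```

Note how multiplying two ciphertexts modulo *n* decrypts to the XOR of the plaintext bits, which is precisely the (mod 2 additive) homomorphic property of the scheme.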

**Benaloh's scheme:** Benaloh's scheme (Benaloh, 1988) is a generalization of the GM scheme that enables one to manage inputs of *l*(*k*) bits, *k* being a prime satisfying some specified constraints. Encryption is similar to that in the GM scheme: encrypting a message *m* ∈ {0, …, *k* - 1} amounts to picking an integer *r* ∈ *Zn*\* and computing *c* = *g*<sup>*m*</sup>*r*<sup>*k*</sup> *mod n*. However, the decryption phase is more complex. If the input and output sizes are *l*(*k*) and *l*(*n*) bits respectively, the expansion is equal to *l*(*n*) / *l*(*k*). The value of expansion obtained in this approach is less than that achieved in GM, which makes the scheme more attractive; moreover, the encryption is not too expensive either. The overhead in the decryption process is estimated to be *O*(√*k*.*l*(*k*)) for a pre-computation which remains constant for each dynamic decryption step. This implies that the value of *k* has to be taken very small, which in turn limits the gain obtained on the value of expansion.
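A minimal sketch of Benaloh's encryption and decryption may look as follows. The tiny parameters *k* = 5, *p* = 11, *q* = 7 and *g* = 3 are our own illustrative choices, and we decrypt by exhaustive search over the *k* possible plaintexts, which is feasible only because *k* is small; a real implementation would use the pre-computation mentioned above.

```python
import math
import random

# Toy Benaloh parameters: k divides p-1, gcd(k, (p-1)/k) = 1, gcd(k, q-1) = 1.
k, p, q = 5, 11, 7
n, phi = p * q, (p - 1) * (q - 1)
g = 3                                # chosen so that g^(phi/k) mod n has order k

def benaloh_encrypt(m):
    # m is a block in {0, ..., k-1}
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n) * pow(r, k, n)) % n

def benaloh_decrypt(c):
    # c^(phi/k) = g^(m*phi/k) since r^phi = 1 mod n; recover m by search.
    a = pow(c, phi // k, n)
    for m in range(k):
        if pow(g, m * (phi // k), n) == a:
            return m

c2, c4 = benaloh_encrypt(2), benaloh_encrypt(4)
assert benaloh_decrypt(c2) == 2
# Homomorphic addition modulo k: Dec(c2 * c4) = (2 + 4) mod 5 = 1.
assert benaloh_decrypt((c2 * c4) % n) == 1
```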

**Naccache-Stern scheme:** This scheme (Naccache & Stern, 1998) is an improvement of Benaloh's scheme. Using a value of the parameter *k* that is greater than that used in Benaloh's scheme, it achieves a smaller expansion and thereby attains superior efficiency. The encryption step is precisely the same as in Benaloh's scheme; however, decryption is different. The value of expansion is the same as in Benaloh's scheme, i.e., *l*(*n*) / *l*(*k*), but the cost of decryption is lower, namely *O*(*l*(*n*)<sup>5</sup> *log*(*l*(*n*))). The authors claim that it is possible to choose the values of the system parameters in such a way that an expansion of 4 is achieved (Naccache & Stern, 1998).

**Okamoto-Uchiyama scheme:** To improve the performance of the earlier schemes on homomorphic encryption, Okamoto and Uchiyama changed the base group *G* (Okamoto & Uchiyama, 1998). By taking *n* = *p*<sup>2</sup>*q*, with *p* and *q* two large prime numbers as usual, and the group *G* = *Zp*<sup>2</sup>\*, the authors achieve *k* = *p*. The value of the expansion obtained in the scheme is 3. One of the biggest advantages of this scheme is that its security is equivalent to the factorization of *n*. However, a chosen-ciphertext attack has been proposed on this scheme that recovers the factorization of *n*, so it currently has limited applicability. Nonetheless, this scheme was used to design the EPOC systems (Okamoto et al., 2000), which are accepted in the *IEEE standard specifications for public-key cryptography* (IEEE P1363).

**Paillier scheme:** One of the most well-known homomorphic encryption schemes is due to Paillier (Paillier, 1999). It is an improvement over the earlier schemes in the sense that it decreases the value of expansion from 3 to 2. The scheme uses *n* = *p*.*q* with gcd(*n*, *ϕ*(*n*)) = 1, where, as usual, *p* and *q* are two large primes. However, it considers the group *G* = *Zn*<sup>2</sup>\*, and a proper choice of *H* leads to *k* = *l*(*n*). While the cost of encryption is not too high, decryption needs one exponentiation modulo *n*<sup>2</sup> to the power *λ*(*n*), and a multiplication modulo *n*. This makes decryption a somewhat heavyweight process. The author has shown how to manage decryption efficiently using the famous *Chinese Remainder Theorem*. With smaller expansion and lower cost compared with the other schemes, this scheme found great acceptance. In 2002, Cramer and Shoup proposed a general approach to achieve higher security against *adaptive chosen-ciphertext attacks* for certain cryptosystems with some particular algebraic properties (Cramer & Shoup, 2002). They applied their propositions to Paillier's original scheme and designed a stronger variant of homomorphic encryption. Bresson et al. proposed a slightly different version of a homomorphic encryption scheme that is more accurate for some applications (Bresson et al., 2003).
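The following toy sketch illustrates Paillier encryption, decryption, and the additive homomorphic property. The small primes and the choice *g* = *n* + 1 (a customary choice, which makes the decryption constant easy to compute) are our own illustrative assumptions.

```python
import math
import random

p, q = 47, 59                        # toy primes; gcd(n, phi(n)) = 1 holds here
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)         # Carmichael function lambda(n)

def L(u):
    # The "L-function" of Paillier's scheme: L(u) = (u - 1) / n.
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def paillier_encrypt(m):
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def paillier_decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = paillier_encrypt(123), paillier_encrypt(456)
assert paillier_decrypt(c1) == 123
# Additive homomorphism: multiplying ciphertexts adds plaintexts mod n.
assert paillier_decrypt((c1 * c2) % n2) == (123 + 456) % n
```

The last assertion is the property that makes Paillier so popular in applications such as electronic voting: a product of encrypted values decrypts to their sum, without any individual value being revealed.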

Homomorphic Encryption — Theory and Application

http://dx.doi.org/10.5772/56687


**Damgard-Jurik scheme:** Damgard and Jurik propose a generalization of Paillier's scheme to groups of the form *Zn*<sup>*s*+1</sup>\* for *s* > 0 (Damgard & Jurik, 2001). In this scheme, choosing larger values of *s* achieves lower values of expansion. The scheme can be used in a number of applications; for example, we can mention the adaptation of the size of the plaintext, the use of threshold cryptography, electronic voting, and so on. To encrypt a message *m* ∈ *Zn*\*, one picks at random *r* ∈ *Zn*\* and computes *g*<sup>*m*</sup>*r*<sup>*n*<sup>*s*</sup></sup> ∈ *Zn*<sup>*s*+1</sup>. The authors show that if one can break the scheme for a given value *s* = *σ*, then one can break it for *s* = *σ* - 1. They also show that the semantic security of this scheme is equivalent to that of Paillier's scheme. The value of expansion is 1 + 1/*s*, so the expansion can come close to 1 if *s* is sufficiently large. The ratio of the cost of encryption in this scheme over Paillier's scheme can be estimated as *s*(*s* + 1)(*s* + 2)/6, and the corresponding ratio for the decryption process as (*s* + 1)(*s* + 2)/6. Even though this scheme has a lower value of expansion than Paillier's scheme, it is computationally more intensive. Moreover, if we want to encrypt or decrypt *k* blocks of *l*(*n*) bits, running Paillier's scheme *k* times is less expensive than running Damgard-Jurik's scheme.

**Galbraith scheme:** This is an adaptation of the existing homomorphic encryption schemes to the context of elliptic curves (Galbraith, 2002). Its expansion is equal to 3. For *s* = 1, the ratio of the encryption cost for this scheme over that of Paillier's scheme can be estimated to be about 7, while the same ratio for the cost of decryption is about 14 for the same value of *s*. However, the most important advantage of this scheme is that the cost of encryption and decryption can be decreased using larger values of *s*. In addition, the security of the scheme increases with the value of *s*, as is the case in Damgard-Jurik's scheme.

**Castagnos scheme:** Castagnos explored the possibility of improving the performance of homomorphic encryption schemes using quotients of quadratic fields (Castagnos, 2006; Castagnos, 2007). This scheme achieves an expansion value of 3, and the ratio of its encryption/decryption cost with *s* = 1 over Paillier's scheme can be estimated to be about 2.

#### **5. Applications and properties of homomorphic encryption schemes**

An inherent drawback of homomorphic cryptosystems is that attacks on these systems might exploit their additional structural information. For instance, using plain RSA (Rivest et al., 1978b) for signing, the multiplication of two signatures yields a valid signature of the product of the two corresponding messages. Although there are many ways to avoid such attacks, for instance by the application of hash functions or the use of redundancy or probabilistic schemes, this potential weakness leads us to the question of why homomorphic schemes should be used instead of conventional cryptosystems in certain situations. The main reason for the interest in homomorphic cryptosystems is their wide application scope. There are theoretical as well as practical applications in different areas of cryptography. In the following, we list some of the main applications and properties of homomorphic schemes and summarize the idea behind them.
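The RSA malleability just mentioned is easy to demonstrate with textbook (unpadded) RSA. The toy key below is our own illustrative choice; real RSA signatures use padding precisely to destroy this multiplicative structure.

```python
p, q = 61, 53                  # toy RSA modulus; textbook RSA, no padding
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(m):
    return pow(m, d, n)

def verify(m, s):
    return pow(s, e, n) == m % n

s1, s2 = sign(6), sign(7)
assert verify(6, s1) and verify(7, s2)
# The product of the two signatures verifies as a signature on 6 * 7 = 42,
# a message the signer never signed -- the homomorphic structure at work.
assert verify(42, (s1 * s2) % n)
```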

#### **5.1. Some applications of homomorphic encryption schemes**


**Protection of mobile agents:** One of the most interesting applications of homomorphic encryption is the protection of mobile agents. As we have seen in Section 3, a homomorphic encryption scheme on a special non-abelian group would lead to an algebraically homomorphic cryptosystem on the finite field *F2*. Since all conventional computer architectures are based on binary strings and require only multiplication and addition, such homomorphic cryptosystems would offer the possibility to encrypt a whole program so that it is still executable. Hence, it could be used to protect mobile agents against malicious hosts by encrypting them (Sander & Tschudin, 1998a). The protection of mobile agents by homomorphic encryption can be used in two ways: (i) *computing with encrypted functions* and (ii) *computing with encrypted data*. Computation with encrypted functions is a special case of the protection of mobile agents: a secret function is publicly evaluated in such a way that the function remains secret. Using homomorphic cryptosystems, the encrypted function can be evaluated, which guarantees its privacy. Homomorphic schemes also make it possible to compute publicly on encrypted data while maintaining the privacy of the secret data. This can be done by encrypting the data in advance and then exploiting the homomorphic property to compute with the encrypted data.

**Multiparty computation:** In multiparty computation schemes, several parties are interested in computing a common, public function on their inputs while keeping their individual inputs private. This problem belongs to the area of *computing with encrypted data*. Usually in multiparty computation protocols, we have a set of *n* ≥2 players whereas in computing with encrypted data scenarios *n* =2. Furthermore, in multi-party computation protocols, the function that should be computed is publicly known, whereas in the area of computing with encrypted data it is a private input of one party.

**Secret sharing scheme:** In secret sharing schemes, parties share a secret so that no individual party can reconstruct the secret from the information available to it. However, if some parties cooperate with each other, they may be able to reconstruct the secret. In this scenario, the homomorphic property implies that the composition of the shares of the secrets is equivalent to the shares of the composition of the secrets.
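As a minimal illustration of this property, consider additive secret sharing modulo a prime: adding the parties' shares of two secrets componentwise yields valid shares of the sum of the secrets. The modulus and helper names below are our own illustrative choices.

```python
import random

P = 2**31 - 1                  # a prime modulus; all arithmetic is mod P

def share(secret, parties=3):
    """Split a secret into additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a_shares, b_shares = share(1000), share(234)
# Each party adds its own two shares locally; no party learns a or b.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 1234
```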

**Threshold schemes:** Both secret sharing schemes and multiparty computation schemes are examples of threshold schemes. Threshold schemes can be implemented using homomorphic encryption techniques.

**Zero-knowledge proofs:** This is a fundamental primitive of cryptographic protocols and serves as an example of a theoretical application of homomorphic cryptosystems. Zero-knowledge proofs are used to prove knowledge of some private information. For instance, consider the case where a user has to prove her identity to a host by logging in with her account and private password. Obviously, in such a protocol the user wants her private information (i.e., her password) to stay private and not to be leaked during the protocol operation. Zero-knowledge proofs guarantee that the protocol communicates exactly the knowledge that was intended, and no (zero) extra knowledge. Examples of zero-knowledge proofs using the homomorphic property can be found in (Cramer & Damgard, 1998).


**Election schemes:** In election schemes, the homomorphic property provides a tool to obtain the tally given the encrypted votes without decrypting the individual votes.

**Watermarking and fingerprinting schemes:** Digital watermarking and fingerprinting schemes embed additional information into digital data. The homomorphic property is used to add a mark to previously encrypted data. In general, watermarks are used to identify the owner/seller of digital goods to ensure the copyright. In fingerprinting schemes, the person who buys the data should be identifiable by the merchant to ensure that data is not illegally redistributed. Further properties of such schemes can be found in (Pfitzmann & Waidner, 1997; Adelsbach et al. 2002).

**Oblivious transfer:** Oblivious transfer is an interesting cryptographic primitive. Usually, in a two-party 1-out-of-2 oblivious transfer protocol, the first party sends a bit to the second party in such a way that the second party receives it with probability ½, without the first party knowing whether or not the second party received the bit. An example of such a protocol that uses the homomorphic property can be found in (Lipmaa, 2003).

**Commitment schemes:** Commitment schemes are fundamental cryptographic primitives. In a commitment scheme, a player makes a commitment: she is able to choose a value from some set and commit to her choice such that she can no longer change her mind. She does not have to reveal her choice, although she may do so at some point later. Some commitment schemes can be efficiently implemented using the homomorphic property.

**Lottery protocols:** Usually in a cryptographic lottery, a number pointing to the winning ticket has to be jointly and randomly chosen by all participants. Using a homomorphic encryption scheme this can be realized as follows: Each player chooses a random number which she encrypts. Then using the homomorphic property the encryption of the sum of the random values can be efficiently computed. The combination of this and a *threshold decryption scheme* leads to the desired functionality. More details about homomorphic properties of lottery schemes can be found in (Fouque et al., 2000).

**Mix-nets:** Mix-nets are protocols that provide anonymity for senders by collecting encrypted messages from several users. For instance, one can consider mix-nets that collect ciphertexts and output the corresponding plaintexts in a randomly permuted order. In such a scenario, privacy is achieved by requiring that the permutation that matches inputs to outputs is kept secret from anyone except the mix-net. In particular, determining a correct input/output pair, i.e., a ciphertext with its corresponding plaintext, should not be more effective than guessing one at random. A desirable property for building such mix-nets is re-encryption, which is achieved by using homomorphic encryption. More information about applications of homomorphic encryption in mix-nets can be found in (Golle et al., 2004; Damgard & Jurik, 2003).

#### **5.2. Some properties of homomorphic encryption schemes**


Homomorphic encryption schemes have some interesting mathematical properties. In the following, we mention some of these properties.

**Re-randomizable encryption/re-encryption:** Re-randomizable cryptosystems (Groth, 2004) are probabilistic cryptosystems with the additional property that, given the public key *ke* and an encryption *Eke*(*m*, *r*) of a message *m* ∈ *M* under the public key *ke* and a random number *r* ∈ *Z*, it is possible to efficiently convert *Eke*(*m*, *r*) into another encryption *Eke*(*m*, *r'*) that is perfectly indistinguishable from a *fresh* encryption of *m* under the public key *ke*. This property is also called *re-encryption*.

It is obvious that every probabilistic homomorphic cryptosystem is re-randomizable. Without loss of generality, we assume that the cryptosystem is additively homomorphic. Given *Eke*(*m*, *r*) and the public key *ke*, we can compute *Eke*(0, *r''*) for a random number *r''* and hence compute the following:

*Add*(*Eke*(*m*, *r*), *Eke*(0, *r''*)) = *Eke*(*m* + 0, *r'*) = *Eke*(*m*, *r'*)

where *r'* is an appropriate random number. We note that this is exactly what a *blinding algorithm* does.
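A classical concrete instance is ElGamal encryption, which is multiplicatively homomorphic and therefore re-randomizable: blinding a ciphertext with a fresh encryption of the identity element 1 yields a new, unlinkable encryption of the same plaintext. The toy group parameters and function names below are our own illustrative choices and are insecurely small.

```python
import random

p, g = 467, 2                  # toy prime-field ElGamal; insecurely small
x = 127                        # private key
h = pow(g, x, p)               # public key

def enc(m):
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 * c1^(-x) mod p

def reencrypt(c):
    """Multiply componentwise by a fresh encryption of 1, the identity."""
    t = random.randrange(1, p - 1)
    c1, c2 = c
    return ((c1 * pow(g, t, p)) % p, (c2 * pow(h, t, p)) % p)

c = enc(42)
c_fresh = reencrypt(c)
assert c_fresh != c            # the ciphertext changes...
assert dec(c_fresh) == 42      # ...but still decrypts to the same plaintext
```

This is exactly the operation a mix-net performs on each ciphertext before permuting its batch, so that input and output ciphertexts cannot be linked.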

**Random self-reducibility:** Along with the possibility of re-encryption comes the property of random self-reducibility concerning the problem of computing the plaintext from the ciphertext. A cryptosystem is called *random self-reducible* if any algorithm that can break a non-trivial fraction of ciphertexts can also break a random instance with significant probability. This property is discussed in detail in (Damgard et al., 2010; Sander et al., 1999).

**Verifiable encryptions / fair encryptions:** If an encryption is verifiable, it provides a mechanism to check the correctness of encrypted data without compromising the secrecy of the data. For instance, this is useful in voting schemes to convince any observer that the encrypted name of a candidate, i.e., the encrypted vote, is indeed in the list of candidates. A cryptosystem with this property that is based on homomorphic encryption can be found in (Poupard & Stern, 2000). Verifiable encryptions are also called *fair encryptions*.

#### **6. Fully homomorphic encryption schemes**

In 2009, Gentry described the first plausible construction of a fully homomorphic cryptosystem that supports both addition and multiplication (Gentry, 2009). Gentry's proposed fully homomorphic encryption consists of several steps: first, it constructs a *somewhat homomorphic* scheme that supports evaluating low-degree polynomials on the encrypted data. Next, it *squashes* the decryption procedure so that it can be expressed as a low-degree polynomial which is supported by the scheme, and finally, it applies a *bootstrapping transformation* to obtain a fully homomorphic scheme. The essential approach of this scheme is to derive and establish a process that can evaluate polynomials of high-enough degree using a decryption procedure that can be expressed as a polynomial of low-enough degree. Once the degree of polynomials that can be evaluated by the scheme exceeds the degree of the decryption polynomial by a factor of two, the scheme is called *bootstrappable*, and it can then be converted into a fully homomorphic scheme.
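Gentry's somewhat homomorphic scheme works over ideal lattices, but the "noisy ciphertext" blueprint is easiest to see in a toy symmetric scheme over the integers, in the style of the later DGHV construction rather than Gentry's actual scheme; all parameters below are illustrative only. Each ciphertext carries a small noise term, ciphertext addition and multiplication act as XOR and AND on the plaintext bits, and every operation grows the noise, which is why only low-degree polynomials can be evaluated before decryption fails.

```python
import random

# Toy symmetric somewhat homomorphic scheme over the integers
# (DGHV-style sketch; NOT Gentry's lattice scheme, toy parameters).
p = 10007  # secret odd modulus

def encrypt(bit):
    q = random.randrange(1, 2**20)
    r = random.randrange(-5, 6)        # small noise
    return p * q + 2 * r + bit

def decrypt(c):
    centered = c % p
    if centered > p // 2:
        centered -= p                  # centered remainder mod p
    return centered % 2                # parity of the noise term is the bit

# Ciphertext addition = XOR, multiplication = AND, as long as the
# accumulated noise stays below p/2.
for a in (0, 1):
    for b in (0, 1):
        assert decrypt(encrypt(a) + encrypt(b)) == (a ^ b)
        assert decrypt(encrypt(a) * encrypt(b)) == (a & b)
```

Repeated multiplications roughly square the noise, so this scheme is only *somewhat* homomorphic; bootstrapping is what removes this depth limit.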


Homomorphic Encryption — Theory and Application

http://dx.doi.org/10.5772/56687

17


For designing a bootstrappable scheme, Gentry presented a somewhat homomorphic scheme (Gentry, 2009) which is roughly a GGH (Goldreich, Goldwasser, Halevi)-type scheme (Goldreich et al., 1997; Micciancio, 2001) over ideal lattices. Gentry later proved that, with an appropriate key-generation procedure, the security of that scheme can be reduced to the worst-case hardness of some lattice problems in ideal lattice constructions (Gentry, 2010). Since this somewhat homomorphic scheme is not bootstrappable, Gentry described a transformation to squash the decryption procedure, reducing the degree of the decryption polynomial (Gentry, 2009). This is done by adding to the public key an additional hint about the secret key in the form of a *sparse subset-sum problem* (SSSP) instance. The public key is augmented with a big set of vectors, such that there exists a very sparse subset of them that adds up to the secret key. A ciphertext of the underlying scheme can be *post-processed* using this additional hint, and the post-processed ciphertext can be decrypted with a low-degree polynomial, thereby achieving a bootstrappable scheme.
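The planted SSSP hint can be pictured with a toy example. In the sketch below, plain integers stand in for the vectors of Gentry's actual construction, and all sizes are hypothetical: a large random set is generated so that a very sparse subset of it sums to a "secret".

```python
import random

# Toy sparse subset-sum instance (integers stand in for vectors).
secret = 123456789
N, k = 50, 3          # public set size, sparsity of the hidden subset

elems = [random.randrange(10**12) for _ in range(N - 1)]
idx = random.sample(range(N - 1), k - 1)
# Force the last element to complete the planted subset (it may be
# negative in this toy version; the real scheme works with vectors).
elems.append(secret - sum(elems[i] for i in idx))
subset = sorted(idx + [N - 1])

assert sum(elems[i] for i in subset) == secret
# Recovering `subset` given only `elems` and `secret` is the SSSP;
# the squashing step relies on this being hard for very sparse subsets.
```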

Gentry's construction is quite involved – the secret key, even in the private-key version of his scheme, is a short basis of a *random ideal lattice*. Generating pairs of public and secret bases with the right distributions, appropriate for the worst-case to average-case reduction, is technically quite complicated. A significant research effort has been devoted to increasing the efficiency of its implementation (Gentry & Halevi, 2011; Smart & Vercauteren, 2010).

A parallel line of work that utilizes ideal lattices in cryptography dates back to the NTRU cryptosystem (Hoffstein et al., 1998). This approach uses ideal lattices for efficient cryptographic constructions. The additional structure of ideal lattices, compared to ordinary lattices, makes their representation more powerful and enables faster computation. Motivated by the work of Micciancio (Micciancio, 2007), a significant body of work (Peikert & Rosen, 2006; Lyubashevsky & Micciancio, 2006; Peikert & Rosen, 2007; Lyubashevsky et al., 2008; Lyubashevsky & Micciancio, 2008) has produced efficient constructions of various cryptographic primitives whose security can formally be reduced to the hardness of short-vector problems in ideal lattices (Brakerski & Vaikuntanathan, 2011).

Lyubashevsky et al. (Lyubashevsky et al., 2010) present the *ring learning with errors* (RLWE) assumption, which is the *ring counterpart* of Regev's learning with errors assumption (Regev, 2005). In a nutshell, the assumption is that, given polynomially many samples over a certain ring of the form (*a<sub>i</sub>*, *a<sub>i</sub>s* + *e<sub>i</sub>*), where *s* is a random *secret ring element*, the *a<sub>i</sub>*'s are distributed uniformly at random in the ring, and the *e<sub>i</sub>*'s are *small* ring elements, it is infeasible for an adversary to distinguish this sequence of samples from random pairs of ring elements. The authors have shown that this simple assumption can be very efficiently reduced to the worst-case hardness of short-vector problems on ideal lattices. They have also shown how to construct a very efficient ring counterpart to Regev's public-key encryption scheme (Regev, 2005), as well as a counterpart to the identity-based encryption scheme presented in (Gentry et al., 2008), by using the basis sampling techniques in (Regev, 2005). The scheme presented in (Lyubashevsky et al., 2010) is very elegant and efficient, since it does not depend on any complex computations over ideal lattices.
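The shape of an RLWE sample can be made concrete with a small sketch. The toy generator below works in the ring Z<sub>q</sub>[x]/(x<sup>n</sup> + 1); the parameters are far too small for any security, the "small" secret is one common variant (the original assumption samples the secret uniformly), and all names are illustrative.

```python
import random

# Toy RLWE sample generator over R_q = Z_q[x] / (x^n + 1).
# Parameters are illustrative only, far too small for security.
n, q = 8, 257

def ring_mul(a, b):
    # Negacyclic convolution: reduce modulo x^n + 1 (so x^n = -1).
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = (i + j) % n
            sign = -1 if i + j >= n else 1
            res[k] = (res[k] + sign * a[i] * b[j]) % q
    return res

def small_poly():
    # "Small" element: coefficients in {-1, 0, 1}.
    return [random.choice([-1, 0, 1]) for _ in range(n)]

def rlwe_sample(s):
    a = [random.randrange(q) for _ in range(n)]       # uniform in R_q
    e = small_poly()                                   # small error
    b = [(x + y) % q for x, y in zip(ring_mul(a, s), e)]
    return a, b  # the pair (a_i, a_i * s + e_i)

s = small_poly()
a, b = rlwe_sample(s)
# Without s, the pair (a, b) is assumed to be indistinguishable from
# a uniformly random pair of ring elements.
```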


Brakerski and Vaikuntanathan raised a natural question: whether the above approaches (i.e., ideal lattices and RLWE) can be effectively exploited so that the benefits of both are achieved at the same time – namely, the functional power of the one (i.e., the ideal lattice approach) and the simplicity and efficiency of the other (i.e., RLWE). They have shown that this can indeed be done (Brakerski & Vaikuntanathan, 2011). They have constructed a somewhat homomorphic encryption scheme based on RLWE. The scheme inherits the simplicity and efficiency of RLWE, as well as the worst-case relation to ideal lattices. Moreover, the scheme enjoys *key dependent message security* (KDM security, also known as *circular security*), since it can securely encrypt polynomial functions (over an appropriately defined ring) of its own secret key. The significance of this feature in the context of homomorphic encryption has been clearly explained by the authors. They argue that all known constructions of fully homomorphic encryption employ a bootstrapping technique that forces the public key of the scheme to grow linearly with the maximal depth of evaluated circuits. This is a major drawback with regard to the usability and the efficiency of the scheme. However, the size of the public key can be made independent of the circuit depth if the somewhat homomorphic scheme can securely encrypt its own secret key. With the design of this scheme, the authors have solved an open problem: achieving *circular secure somewhat homomorphic encryption*. They have also established the circular security of their scheme with respect to the representation of the secret key as a ring element, whereas bootstrapping requires circular security with respect to the bitwise representation of the secret key (actually, the bitwise representation of the *squashed* secret key).
Since there is no prior work that studies a possible co-existence of somewhat homomorphism with any form of circular security, the work is a significant first step towards removing the assumption (Brakerski & Vaikuntanathan, 2011). The authors have also shown how to transform the proposed scheme into a fully homomorphic encryption scheme following Gentry's blueprint of *squashing* and *bootstrapping*. Applying the techniques presented in (Brakerski & Vaikuntanathan, 2011a), the authors argue that *squashing* can even be avoided at the cost of relying on a *sparse* version of RLWE that is not known to reduce to worst-case problems. This greatly enhances the efficiency of the proposed scheme in practical applications. The proposed scheme is also *additively key-homomorphic* – a property that has found applications in achieving security against *key-related attacks* (Applebaum et al., 2011).

Smart and Vercauteren (Smart & Vercauteren, 2010) present a fully homomorphic encryption scheme that has smaller key and ciphertext sizes. The construction proposed by the authors follows the fully homomorphic construction based on ideal lattices proposed by Gentry (Gentry, 2009). It produces a fully homomorphic scheme from a *somewhat homomorphic scheme*. For the somewhat homomorphic scheme, the public and the private keys consist of two large integers (one of which is shared by both the public and the private key), and the ciphertext consists of one large integer. The scheme (Smart & Vercauteren, 2010) has a smaller *ciphertext blow-up* and a reduced key size compared with Gentry's scheme based on ideal lattices. Moreover, the scheme also allows efficient homomorphic encryption over any field of characteristic two. More specifically, it uses arithmetic over *cyclotomic number fields*. In particular, the authors have focused on the field generated by the polynomial *F*(*X*) = *X*<sup>2<sup>*n*</sup></sup> + 1. However, they also noted that the scheme could be applied with arbitrary (even non-cyclotomic) number fields as well. In spite of having many advantages, the major problem with this scheme is that the key generation method is very slow.


Gentry and Halevi presented a novel implementation approach for the variant of the Smart and Vercauteren proposition (Smart & Vercauteren, 2010), with a greatly improved key generation phase (Gentry & Halevi, 2011). In particular, the authors noted that the key generation (for cyclotomic fields) is essentially an application of a *Discrete Fourier Transform* (DFT), followed by a small amount of computation, and then an application of the *inverse* transform. The authors further demonstrate that it is not even required to perform the DFTs if one selects the cyclotomic field to be of the form *X*<sup>2<sup>*n*</sup></sup> + 1. The authors illustrate this by using a recursive approach to deduce two constants from the secret key, which subsequently enables the key generation algorithm to construct a valid associated public key. The key generation method of Gentry and Halevi (Gentry & Halevi, 2011) is fast. However, the scheme appears particularly tailored to work with two-power roots of unity.

Researchers have also examined ways of improving key generation in fully homomorphic encryption schemes. For example, in (Ogura et al., 2010), a method is proposed for constructing keys for essentially random number fields by sampling random elements and analyzing the *eigenvalues* of the corresponding matrices. However, this method is unable to achieve the improvement in efficiency, in terms of *reduced ciphertext blow-up*, achieved in (Smart & Vercauteren, 2010) and (Gentry & Halevi, 2011).

Stehle and Steinfeld improved Gentry's fully homomorphic scheme and obtained a faster fully homomorphic scheme with *O*(*n*<sup>3.5</sup>) bit complexity per elementary binary addition/multiplication gate (Stehle & Steinfeld, 2010). However, the hardness assumption underlying the security of the scheme is stronger than that of Gentry's scheme (Gentry, 2009). The improved complexity of the proposed scheme stems from two sources. First, the authors have given a more aggressive security analysis of the *sparse subset sum problem* (SSSP) against lattice attacks as compared to the analysis presented in (Gentry, 2009). The SSSP, along with the ideal lattice *bounded distance decoding* (BDD) problem, are the two problems underlying the security of Gentry's fully homomorphic scheme. In his security analysis of BDD, Gentry used the best known complexity bound for the approximate *shortest vector problem* (SVP) in lattices. However, in analyzing SSSP, Gentry assumed the availability of an exact SVP oracle. On the contrary, the finer analysis of Stehle and Steinfeld for SSSP takes into account the complexity of approximate SVP, thereby making it more consistent with the assumption underlying the analysis of the BDD problem. This leads to smaller parameter choices in the scheme. Second, Stehle and Steinfeld have relaxed the definition of fully homomorphic encryption to allow for a negligible but non-zero probability of decryption error. They have shown that the randomness in the *SplitKey* key generation for the *squashed decryption algorithm* (i.e., the decryption algorithm of the bootstrappable scheme) in Gentry's scheme can be gainfully exploited to allow a negligible decryption error probability. This relaxation, although negligible in value, allows the rounding precision used in representing the ciphertext components to be almost half of that required in Gentry's scheme (Gentry, 2009), which involves zero error probability.


In (Chunsheng, 2012), Chunsheng proposed a modification of the fully homomorphic encryption scheme of Smart and Vercauteren (Smart & Vercauteren, 2010). The author applied a *self-loop bootstrappable technique* so that the security of the modified scheme depends only on the hardness of the *polynomial coset problem* and does not require any assumption of the *sparse subset problem* as required in the original work of Smart and Vercauteren (Smart & Vercauteren, 2010). In addition, the author constructed a *non-self-loop fully homomorphic encryption scheme* that uses *cycle keys*. In a nutshell, the security of the improved fully homomorphic encryption scheme in this work is based on three hard mathematical problems: (i) the integer factoring problem, (ii) the problem of solving Diophantine equations, and (iii) the approximate greatest common divisor problem.

Boneh and Freeman propose a linearly homomorphic signature scheme that authenticates vector subspaces of a given ambient space (Boneh & Freeman, 2011). The scheme has several novel features that were not present in any of the existing similar schemes. First, the scheme is the first of its kind that enables *authentication of vectors over binary fields*; previous schemes could not authenticate vectors with large or growing coefficients. Second, the scheme is the only one that is based on the *problem of finding short vectors in integer lattices*, and therefore, it enjoys the worst-case security guarantee that is common to *lattice-based cryptosystems*. The scheme can be used to authenticate linear transformations of signed data, such as those arising when computing means and Fourier transforms, or in networks that use *network coding* (Boneh & Freeman, 2011). The work makes three major contributions to the state of the art, as identified by the authors: (i) *Homomorphic signatures over* **F**<sub>2</sub>: the authors have constructed the first *unforgeable linearly homomorphic signature scheme* that authenticates vectors with coordinates in **F**<sub>2</sub>. It is an example of a cryptographic primitive that can be built using lattice models, but cannot be built using bilinear maps or other traditional algebraic methods based on factoring or discrete-log-type problems. The scheme can be modified to authenticate vectors with coefficients in other small fields, including prime fields and extension fields such as **F**<sub>2<sup>*d*</sup></sub>.

Moreover, the scheme is private, in the sense that a derived signature on a vector **v** leaks no information about the original signed vectors beyond what is revealed by **v**. (ii) *A simple k-time signature without random oracles:* the authors have presented a stateless signature scheme and have proved that it is secure in the standard model when used to sign at most *k* messages, for small values of *k*. The public key of the scheme is significantly smaller than that of any other stateless lattice-based signature scheme that can sign multiple large messages and is secure in the standard model. The construction proposed by the authors can be viewed as *removing the random oracle* from the signature scheme of Gentry, Peikert, and Vaikuntanathan (Gentry et al., 2008), but only for signing *k* messages (Boneh & Freeman, 2011). (iii) *New tools for lattice-based signatures:* the scheme is unforgeable based on a new hard problem on lattices, which the authors have called the *k*-*small integer solutions* (*k*-SIS) problem. The authors have shown that *k*-SIS reduces to the *small integer solution* (SIS) problem, which is known to be as hard as standard worst-case lattice problems (Micciancio & Regev, 2007).

It is also interesting to extend the definition of non-malleability to allow for *chosen ciphertext attacks*. As an example, we consider the problem of *implementing an encrypted targeted advertisement system that generates advertisements depending on the contents of a user's e-mail*. Since the e-mail is stored in encrypted form under the user's public key, the e-mail server performs a homomorphic evaluation and computes an encrypted advertisement to be sent back to the user. The user decrypts it and performs an action depending on what she sees. If the advertisement is relevant, she might choose to click on it; otherwise, she simply discards it. However, if the e-mail server is aware of this information, namely whether the user clicked on the advertisement or not, it can use this as a restricted *decryption oracle* to break the security of the user's encryption scheme and possibly even recover her secret key. Such attacks are ubiquitous whenever we compute on encrypted data, almost to the point that CCA security seems inevitable. Yet, it is easy to see that chosen-ciphertext-secure (CCA2-secure) homomorphic encryption schemes cannot exist. Therefore, finding an appropriate security definition, and constructions that satisfy it, remains an open problem.

Homomorphic Encryption — Theory and Application

http://dx.doi.org/10.5772/56687

21

**Fully homomorphic encryption and functional decryption:** Homomorphic encryption schemes permit anyone to evaluate functions on encrypted data, but the evaluators never see any information about the result. It is possible to construct an encryption scheme where a user can compute *f(m)* from an encryption of a message *m*, but she should not be able to learn any other information about *m* (including the intermediate results in the computation of *f*)? Essentially, the issue boils down to the following question: *can we control the information that the evaluator can see?* Such an encryption scheme is called a *functional encryption scheme.* The concept of functional encryption scheme was first introduced by Sahai and Waters (Sahai & Waters, 2005) and subsequently investigated in a number of intriguing works (Katz et al., 2013; Lewko et al., 2010; Boneh et al., 2011; Agrawal et al., 2011). Although the constructions in these propositions work for several interesting families of functions (such as monotone formulas and inner products), construction of a fully functional encryption scheme is still not achieved and remains as an open problem. What we need is a novel and generic encryption system that provides us with fine-grained control over what one can see and access and what

**Other problems and applications:** Another important open question relates to the assump‐ tions underlying the current fully homomorphic encryption systems. All known fully homo‐ morphic encryption schemes are based on *hardness of lattice problems*. The natural question that arises - can we construct fully homomorphic from other approaches – say, for example, from number-theoretic assumptions? Can we bring in the issue of the hardness of factoring or

In addition to the scenarios where it is beneficial to keep all data encrypted and to perform computations on encrypted data, fully homomorphic encryption can be gainfully exploited to solve a number of practical problems in cryptography. Two such examples are the problems of *verifiably outsourcing computation* (Goldwasser et al., 2008; Gennaro et al., 2010; Chung et al., 2010; Applebaum et al., 2010) and *constructing short non-interactive zero-knowledg e proofs* (Gentry, 2009). Some of the applications of fully homomorphic encryption do not require its full power. For example, in *private information retrieval* (PIR), it is sufficient to have a somewhat

achieve the definition is in demand.

one can compute on data to get a desired output.

discrete logarithms in this problem?

#### **7. Conclusion and future trends**

The study of fully homomorphic encryption has led to a number of new and exciting concepts and questions, as well as a powerful toolkit to address them. We conclude the chapter by discussing a number of research directions related to the domain of fully homomorphic encryption and, more generally, to the problem of computing on encrypted data.

**Applications of fully homomorphic encryption:** While Gentry's original construction was considered infeasible for practical deployments, recent constructions and implementation efforts have drastically improved the efficiency of fully homomorphic encryption (Vaikuntanathan, 2011). The initial implementation efforts focused on Gentry's original scheme and its variants (Smart & Vercauteren, 2010; Smart & Vercauteren, 2012; Coron et al., 2011; Gentry & Halevi, 2011), which seemed to pose rather inherent efficiency bottlenecks. Later implementations leverage recent algorithmic advances (Brakerski & Vaikuntanathan, 2011; Brakerski et al., 2011; Brakerski & Vaikuntanathan, 2011a) that result in asymptotically better fully homomorphic encryption systems, as well as new algebraic mechanisms that improve the overall efficiency of these schemes (Naehrig et al., 2011; Gentry et al., 2012; Smart & Vercauteren, 2012).

**Non-malleability and homomorphic encryption:** Homomorphism and *non-malleability* are two orthogonal properties of an encryption scheme. Homomorphic encryption schemes permit anyone to transform an encryption of a message *m* into an encryption of *f(m)* for nontrivial functions *f*. Non-malleable encryption, on the other hand, prevents precisely this sort of transformation: it requires that no adversary be able to transform an encryption of *m* into an encryption of any *related* message. Essentially, what we need is a combination of both properties that *selectively permits homomorphic computations* (Vaikuntanathan, 2011). That is, the evaluator should be able to homomorphically compute any function from some pre-specified class *Fhom*; however, she should not be able to transform an encryption of *m* into an encryption of *f(m)* for any *f* that does not belong to *Fhom*. The natural question that arises is: *can we control what is being (homomorphically) computed?*
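To make the malleability side of this tension concrete, here is a minimal sketch using textbook RSA, which is multiplicatively homomorphic and therefore malleable. The toy primes and parameters below are assumptions chosen purely for illustration; the scheme as written is completely insecure.

```python
# Textbook RSA (toy parameters, illustration only, NOT secure):
# multiplicative homomorphism makes ciphertexts malleable.

p, q = 61, 53                    # toy primes
n = p * q                        # modulus
e = 17                           # public exponent, coprime with (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m = 42
c = enc(m)
# Without the secret key, anyone can turn Enc(m) into Enc(2m):
# Enc(m) * Enc(2) = (m^e * 2^e) mod n = (2m)^e mod n = Enc(2m).
c_mauled = (c * enc(2)) % n
assert dec(c_mauled) == 2 * m    # a valid encryption of a *related* message
```

A non-malleable scheme rules out exactly this transformation; the open problem described above is to permit it for functions in *Fhom* while forbidding it for everything else.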

Answering this question turns out to be tricky. Boneh, Segev and Waters (Boneh et al., 2012) propose the notion of *targeted malleability*, a possible formalization of such a requirement, as well as constructions of such encryption schemes. Their encryption scheme is based on a strong *knowledge-of-exponent*-type assumption and allows iterative evaluation of at most *t* functions, where *t* is a pre-specified constant. Improving their construction, as well as the underlying complexity assumptions, is an important open problem (Vaikuntanathan, 2011).

It is also interesting to extend the definition of non-malleability to allow for *chosen-ciphertext attacks*. As an example, consider the problem of *implementing an encrypted targeted advertisement system that generates advertisements depending on the contents of a user's e-mail*. Since the e-mail is stored in encrypted form under the user's public key, the e-mail server performs a homomorphic evaluation and computes an encrypted advertisement to be sent back to the user. The user decrypts it and performs an action depending on what she sees: if the advertisement is relevant, she might click on it; otherwise, she simply discards it. However, if the e-mail server learns this information, namely whether the user clicked on the advertisement or not, it can use this as a restricted *decryption oracle* to break the security of the user's encryption scheme and possibly even recover her secret key. Such attacks are ubiquitous whenever we compute on encrypted data, almost to the point that CCA security seems indispensable. Yet, it is easy to see that chosen-ciphertext-secure (CCA2-secure) homomorphic encryption schemes cannot exist, since homomorphic evaluation is exactly the kind of ciphertext mauling that CCA2 security rules out. Therefore, an appropriate relaxed security definition, and constructions that achieve it, are in demand.
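The incompatibility of homomorphism with CCA2 security can be sketched with a toy Paillier implementation (tiny, insecure parameters, assumed only for illustration): the adversary mauls the challenge ciphertext into a *different* ciphertext of a related message, and a single decryption query on the mauled ciphertext recovers the challenge plaintext.

```python
# Toy Paillier (additively homomorphic; tiny parameters, NOT secure) and
# the generic CCA2 attack that works against ANY homomorphic scheme.
from math import gcd
import random

p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2   # g^m * r^n mod n^2

def dec(c):
    # L(x) = (x - 1) // n; plaintext = L(c^lam mod n^2) * L(g^lam mod n^2)^-1 mod n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

m = 1234
challenge = enc(m)

def oracle(c):
    assert c != challenge, "a CCA2 oracle refuses only the challenge itself"
    return dec(c)

# Attack: multiply the challenge by a fresh Enc(1) to get a different
# ciphertext of m + 1, query the oracle on it, and subtract 1.
mauled = (challenge * enc(1)) % n2
recovered = (oracle(mauled) - 1) % n
assert recovered == m
```

Nothing here is specific to Paillier: any nontrivial homomorphic operation yields the same one-query attack, which is why only relaxed definitions (such as CCA1 or targeted malleability) are achievable for homomorphic schemes.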

Theory and Practice of Cryptography and Network Security Protocols and Technologies

**Fully homomorphic encryption and functional decryption:** Homomorphic encryption schemes permit anyone to evaluate functions on encrypted data, but the evaluators never see any information about the result. Is it possible to construct an encryption scheme in which a user can compute *f(m)* from an encryption of a message *m*, but cannot learn any other information about *m* (including the intermediate results in the computation of *f*)? Essentially, the issue boils down to the following question: *can we control the information that the evaluator can see?* Such a scheme is called a *functional encryption scheme.* The concept of functional encryption was first introduced by Sahai and Waters (Sahai & Waters, 2005) and subsequently investigated in a number of intriguing works (Katz et al., 2013; Lewko et al., 2010; Boneh et al., 2011; Agrawal et al., 2011). Although the constructions in these proposals work for several interesting families of functions (such as monotone formulas and inner products), a functional encryption scheme for general functions has not yet been achieved and remains an open problem. What we need is a novel and generic encryption system that provides fine-grained control over what one can see and access and what one can compute on data to get a desired output.

**Other problems and applications:** Another important open question relates to the assumptions underlying current fully homomorphic encryption systems. All known fully homomorphic encryption schemes are based on the *hardness of lattice problems*. The natural question that arises is: can we construct fully homomorphic encryption from other approaches, say, from number-theoretic assumptions? Can the hardness of factoring or of computing discrete logarithms be brought to bear on this problem?

In addition to the scenarios where it is beneficial to keep all data encrypted and to perform computations on encrypted data, fully homomorphic encryption can be gainfully exploited to solve a number of practical problems in cryptography. Two such examples are the problems of *verifiably outsourcing computation* (Goldwasser et al., 2008; Gennaro et al., 2010; Chung et al., 2010; Applebaum et al., 2010) and *constructing short non-interactive zero-knowledge proofs* (Gentry, 2009). Some applications of fully homomorphic encryption do not require its full power. For example, in *private information retrieval* (PIR), it is sufficient to have a somewhat homomorphic encryption scheme that is capable of evaluating simple database indexing functions. For such applications, what is needed is an optimized encryption scheme with less functionality that is more efficient than a fully homomorphic one. The design of such schemes for different application scenarios is also a hot topic of current research.
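The PIR idea can be sketched with an additively homomorphic toy scheme (a tiny, insecure Paillier instance, assumed only for illustration): the client sends an encrypted selection vector, and the server homomorphically evaluates the database-indexing function without ever learning which record was requested.

```python
# Single-server PIR sketch over toy Paillier (tiny parameters, NOT secure):
# the server computes Enc(sum_i db[i] * sel[i]) = Enc(db[wanted]) blindly.
from math import gcd
import random

p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

database = [17, 42, 99, 7]      # server-side records (small integers < n)
wanted = 2                      # index the client wants, kept private

# Client: encrypted selection vector, an encryption of 1 at `wanted`,
# encryptions of 0 elsewhere (all ciphertexts look alike to the server).
query = [enc(1 if i == wanted else 0) for i in range(len(database))]

# Server: ciphertext product of query[i]^db[i] encrypts sum_i db[i]*sel[i].
reply = 1
for rec, c in zip(database, query):
    reply = (reply * pow(c, rec, n2)) % n2

assert dec(reply) == database[wanted]   # client recovers record 99
```

Note that only additions (and multiplications by known constants) are evaluated here, so a somewhat homomorphic scheme suffices; full bootstrapping-style FHE would be overkill for this indexing function.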

#### **Author details**

#### Jaydip Sen\*

Department of Computer Science, National Institute of Science & Technology, Odisha, India

#### **References**

[1] Adelsbach, A., Katzenbeisser, S., & Sadeghi, A. (2002). Cryptography Meets Watermarking: Detecting Watermarks with Minimal or Zero Knowledge Disclosure. In: Proceedings of the European Signal Processing Conference (EUSIPCO'02), Vol 1, pp. 446-449, Toulouse, France.

[2] Agrawal, S., Freeman, D. M., & Vaikuntanathan, V. (2011). Functional Encryption for Inner Product Predicates from Learning with Errors. In: Advances in Cryptology - Proceedings of ASIACRYPT'11, Lecture Notes in Computer Science (LNCS), Vol 7073, Springer-Verlag, pp. 21-40.

[3] Ajtai, M. & Dwork, C. (1997). A Public Key Cryptosystem with Worst-Case/Average-Case Equivalence. In: Proceedings of the 29th Annual ACM International Symposium on Theory of Computing (STOC'97), pp. 284-293, ACM Press, New York, NY, USA.

[4] Applebaum, B., Ishai, Y., & Kushilevitz, E. (2010). Semantic Security under Related-Key Attacks and Applications. Innovations in Computer Science (ICS), pp. 45-55, 2011.

[5] Applebaum, B., Ishai, Y., & Kushilevitz, E. (2010). From Secrecy to Soundness: Efficient Verification via Secure Computation. In: Automata, Language and Programming - Proceedings of ICALP, Lecture Notes in Computer Science (LNCS), Vol 6198, Springer-Verlag, pp. 152-163.

[6] Bao, F. (2003). Cryptanalysis of a Provable Secure Additive and Multiplicative Privacy Homomorphism. In: Proceedings of International Workshop on Coding and Cryptography (WCC'03), Versailles, France, pp. 43-49.

[7] Bellare, M. & Rogaway, P. (1995). Optimal Asymmetric Encryption - How to Encrypt with RSA. In: Advances in Cryptology - Proceedings of EUROCRYPT'94, Lecture Notes in Computer Science (LNCS), Vol 950, Springer-Verlag, pp. 92-111.

[8] Benaloh, J. (1994). Dense Probabilistic Encryption. In: Proceedings of the Workshop on Selected Areas of Cryptography, 1994, pp. 120-128.

[9] Benaloh, J. (1988). Verifiable Secret-Ballot Elections. Doctoral Dissertation, Department of Computer Science, Yale University, New Haven, Connecticut, USA.

[10] Ben-Or, M. & Cleve, R. (1992). Computing Algebraic Formulas Using a Constant Number of Registers. SIAM Journal on Computing, Vol 21, No 1, pp. 54-58, 1992.

[11] Blum, M. & Goldwasser, S. (1985). An Efficient Probabilistic Public-Key Encryption Scheme which Hides All Partial Information. In: Advances in Cryptology - Proceedings of EUROCRYPT'84, Lecture Notes in Computer Science (LNCS), Vol 196, Springer-Verlag, pp. 289-299.

[12] Boneh, D. & Freeman, D. M. (2011). Linearly Homomorphic Signatures over Binary Fields and New Tools for Lattice-Based Signatures. In: Public Key Cryptography (PKC'11), Lecture Notes in Computer Science (LNCS), Vol 6571, Springer-Verlag, pp. 1-16.

[13] Boneh, D. & Lipton, R. (1996). Searching for Elements in Black Box Fields and Applications. In: Advances in Cryptology - Proceedings of CRYPTO'96, Lecture Notes in Computer Science (LNCS), Vol 1109, Springer-Verlag, pp. 283-297.

[14] Boneh, D., Segev, G., & Waters, B. (2012). Targeted Malleability: Homomorphic Encryption for Restricted Computations. In: Proceedings of Innovations in Theoretical Computer Science (ITCS), pp. 350-366, ACM Press, New York, NY, USA, 2012.

[15] Brakerski, Z., Gentry, C., & Vaikuntanathan, V. (2011). Fully Homomorphic Encryption without Bootstrapping. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS'12), pp. 309-325, ACM Press, New York, NY, USA.

[16] Brakerski, Z. & Vaikuntanathan, V. (2011). Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages. In: Advances in Cryptology - Proceedings of CRYPTO'11, Lecture Notes in Computer Science (LNCS), Vol 6841, Springer-Verlag, pp. 505-524.

[17] Brakerski, Z. & Vaikuntanathan, V. (2011a). Efficient Fully Homomorphic Encryption from (Standard) LWE. In: Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS'11), pp. 97-106, ACM Press, New York, NY, USA.

[18] Bresson, E., Catalano, D., & Pointcheval, D. (2003). A Simple Public-Key Cryptosystem with a Double Trapdoor Decryption Mechanism and its Applications. In: Advances in Cryptology - Proceedings of ASIACRYPT'03, Lecture Notes in Computer Science (LNCS), Vol 2894, Springer-Verlag, pp. 37-54.

[19] Brickell, E. F. & Yacobi, Y. (1987). On Privacy Homomorphisms. In: Advances in Cryptology - Proceedings of EUROCRYPT 1987, Lecture Notes in Computer Science (LNCS), Vol 304, Springer-Verlag, pp. 117-125.

[20] Canetti, R., Goldreich, O., & Halevi, S. (2004). The Random Oracle Methodology, Revisited. Journal of the ACM (JACM), Vol 51, Issue 4, July 2004, pp. 557-594, ACM Press, New York, NY, USA.

[21] Castagnos, G. (2007). An Efficient Probabilistic Public-Key Cryptosystem over Quadratic Fields Quotients. Finite Fields and Their Applications, Vol 13, No 3, pp. 563-576, July 2007.

[22] Castagnos, G. (2006). Quelques Schemas De Cryptographie Asymetrique Probabiliste. Doctoral Dissertation, Universite De Limoges, 2006. Available Online at: http://epublications.unilim.fr/theses/2006/castagnos-guilhem/castagnos-guilhem.pdf

[23] Chung, K.-M., Kalai, Y., & Vadhan, S. (2010). Improved Delegation of Computation Using Fully Homomorphic Encryption. In: Advances in Cryptology - Proceedings of CRYPTO'10, Lecture Notes in Computer Science (LNCS), Vol 6223, Springer-Verlag, pp. 483-501.

[24] Chunsheng, G. (2012). More Practical Fully Homomorphic Encryption. International Journal of Cloud Computing and Services Science, Vol 1, Issue 4, pp. 199-201.

[25] Coron, J.-S., Mandal, A., Naccache, D., & Tibouchi, M. (2011). Fully Homomorphic Encryption over the Integers with Shorter Public Keys. In: Advances in Cryptology - Proceedings of CRYPTO'11, Lecture Notes in Computer Science (LNCS), Vol 6841, Springer-Verlag, pp. 487-504.

[26] Cramer, R. & Damgard, I. (1998). Zero-Knowledge Proofs for Finite Field Arithmetic, Or: Can Zero-Knowledge be for Free? In: Advances in Cryptology - Proceedings of CRYPTO'98, Lecture Notes in Computer Science (LNCS), Vol 1462, Springer-Verlag, pp. 424-441.

[27] Cramer, R., Damgard, I., & Maurer, U. (2000). General Secure Multi-party Computation from any Linear Secret-Sharing Scheme. In: Advances in Cryptology - Proceedings of EUROCRYPT'00, Lecture Notes in Computer Science (LNCS), Vol 1807, Springer-Verlag, pp. 316-334.

[28] Cramer, R. & Shoup, V. (2002). Universal Hash Proofs and a Paradigm for Adaptive Chosen Ciphertext Secure Public-Key Encryption. In: Advances in Cryptology - Proceedings of EUROCRYPT'02, Lecture Notes in Computer Science (LNCS), Vol 2332, Springer-Verlag, New York, NY, USA, pp. 45-64.

[29] Daemen, J. & Rijmen, V. (2002). The Design of Rijndael: AES - The Advanced Encryption Standard. Information Security and Cryptography, Springer, New York, NY, USA, 2002.

[30] Daemen, J. & Rijmen, V. (2000). The Block Cipher Rijndael. In: Proceedings of International Conference on Smart Card Research and Applications (CARDIS'98), Lecture Notes in Computer Science (LNCS), Vol 1820, Springer-Verlag, pp. 247-256.

[31] Damgard, I. & Jurik, M. (2003). A Length-Flexible Threshold Cryptosystem with Applications. In: Proceedings of the 8th Australasian Conference on Information Security and Privacy (ACISP'03), Lecture Notes in Computer Science (LNCS), Vol 2727, Springer-Verlag, pp. 350-364.

[32] Damgard, I. & Jurik, M. (2001). A Generalisation, a Simplification and Some Applications of Paillier's Probabilistic Public-Key System. In: Proceedings of the 4th International Workshop on Practice and Theory in Public Key Cryptography (PKC'01), Lecture Notes in Computer Science (LNCS), Vol 1992, Springer-Verlag, pp. 119-136.

[33] Damgard, I., Jurik, M., & Nielsen, J. (2010). A Generalization of Paillier's Public-Key System with Applications to Electronic Voting. International Journal of Information Security (IJIS), Special Issue on Special Purpose Protocols, Vol 9, Issue 6, December 2010, pp. 371-385, Springer-Verlag, Heidelberg, Berlin, Germany.

[34] Diffie, W. & Hellman, M. (1976). New Directions in Cryptography. IEEE Transactions on Information Theory, Vol 22, No 6, November 1976, pp. 644-654.

[35] Domingo-Ferrer, J. (2002). A Provably Secure Additive and Multiplicative Privacy Homomorphism. In: Proceedings of the 5th International Conference on Information Security (ISC'02), Lecture Notes in Computer Science (LNCS), Vol 2433, Springer-Verlag, pp. 471-483.

[36] Ekdahl, P. & Johansson, T. (2002). A New Version of the Stream Cipher SNOW. In: Proceedings of the 9th International Workshop on Selected Areas of Cryptography (SAC'02), Lecture Notes in Computer Science (LNCS), Vol 2595, Springer-Verlag, pp. 47-61.

[37] ElGamal, T. (1985). A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. IEEE Transactions on Information Theory, Vol 31, Issue 4, July 1985, pp. 469-472.

[38] Feigenbaum, J. & Merritt, M. (1991). Open Questions, Talk Abstracts, and Summary of Discussions. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol 2, pp. 1-45.

[39] Fellows, M. & Koblitz, N. (1993). Combinatorial Cryptosystems Galore! In: Finite Fields - Theory, Applications and Algorithms, Contemporary Mathematics, Vol 168, Las Vegas, 1994, pp. 51-61.

[40] Fontaine, C. & Galand, F. (2007). A Survey of Homomorphic Encryption for Nonspecialists. EURASIP Journal on Information Security, Vol 2007, January 2007, Article ID 15, Hindawi Publishing Corporation, New York, NY, USA. DOI: 10.1155/2007/13801.

[41] Fouque, P., Poupard, G., & Stern, J. (2000). Sharing Decryption in the Context of Voting or Lotteries. In: Proceedings of the 4th International Conference on Financial Cryptography (FC'00), Lecture Notes in Computer Science (LNCS), Vol 1962, Springer-Verlag, pp. 90-104.

[42] Galbraith, S. D. (2002). Elliptic Curve Paillier Schemes. Journal of Cryptology, Vol 15, No 2, pp. 129-138, August 2002.

[43] Gennaro, R., Gentry, C., & Parno, B. (2010). Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers. In: Advances in Cryptology - Proceedings of CRYPTO'10, Lecture Notes in Computer Science (LNCS), Vol 6223, Springer-Verlag, pp. 465-482.

[44] Gentry, C. (2010). Toward Basing Fully Homomorphic Encryption on Worst-Case Hardness. In: Advances in Cryptology - Proceedings of CRYPTO'10, Lecture Notes in Computer Science (LNCS), Vol 6223, Springer-Verlag, pp. 116-137.

[45] Gentry, C. (2009). Fully Homomorphic Encryption Using Ideal Lattices. In: Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC'09), pp. 169-178, ACM Press, New York, NY, USA.

[46] Gentry, C. & Halevi, S. (2011). Implementing Gentry's Fully-Homomorphic Encryption Scheme. In: Advances in Cryptology - Proceedings of EUROCRYPT'11, Lecture Notes in Computer Science (LNCS), Vol 6632, Springer-Verlag, pp. 129-148.

[47] Gentry, C., Halevi, S., & Smart, N. (2012). Better Bootstrapping in Fully Homomorphic Encryption. In: Proceedings of the 15th International Conference on Practice and Theory in Public Key Cryptography (PKC'12), Lecture Notes in Computer Science (LNCS), Vol 7293, Springer-Verlag, pp. 1-16.

[48] Gentry, C., Peikert, C., & Vaikuntanathan, V. (2008). Trapdoors for Hard Lattices and New Cryptographic Constructions. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC'08), pp. 197-206, ACM Press, New York, NY, USA.

[49] Goldreich, O., Goldwasser, S., & Halevi, S. (1997). Public-Key Cryptosystems from Lattice Reduction Problems. In: Advances in Cryptology - Proceedings of CRYPTO'97, Lecture Notes in Computer Science (LNCS), Vol 1294, Springer-Verlag, pp. 112-131.

[50] Goldwasser, S., Kalai, Y. T., & Rothblum, G. N. (2008). Delegating Computation: Interactive Proofs for Muggles. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC'08), pp. 113-122, ACM Press, New York, NY, USA.

[51] Goldwasser, S. & Micali, S. (1982). Probabilistic Encryption and How to Play Mental Poker Keeping Secret All Partial Information. In: Proceedings of the 14th Annual ACM Symposium on Theory of Computing (STOC'82), pp. 365-377, ACM Press, New York, NY, USA.

[52] Goldwasser, S. & Micali, S. (1984). Probabilistic Encryption. Journal of Computer and System Sciences, Vol 28, Issue 2, pp. 270-299, April 1984.

[53] Golle, P., Jakobsson, M., Juels, A., & Syverson, P. (2004). Universal Re-Encryption for Mixnets. In: Topics in Cryptology - Proceedings of the RSA Conference Cryptographers' Track (CT-RSA'04), Lecture Notes in Computer Science (LNCS), Vol 2964, Springer-Verlag, pp. 163-178.

[54] Grigoriev, D. & Ponomarenko, I. (2006). Homomorphic Public-Key Cryptosystems and Encrypting Boolean Circuits. Applicable Algebra in Engineering, Communication and Computing, Vol 17, Issue 3-4, pp. 239-255, August 2006.

[55] Grigoriev, D. & Ponomarenko, I. (2004). Homomorphic Public-Key Cryptosystems over Groups and Rings. Quaderni di Matematica, Vol 13, pp. 304-325, 2004.

[56] Groth, J. (2004). Rerandomizable and Replayable Adaptive Chosen Ciphertext Attack Secure Cryptosystems. In: Proceedings of the 1st Theory of Cryptography Conference (TCC'04), Lecture Notes in Computer Science (LNCS), Vol 2951, Springer-Verlag, pp. 152-170.

[57] Hoffstein, J., Pipher, J., & Silverman, J. (1998). NTRU: A Ring-Based Public Key Cryptosystem. In: Proceedings of the 3rd International Symposium on Algorithmic Number Theory (ANTS-III), Lecture Notes in Computer Science (LNCS), Vol 1423, Springer-Verlag, pp. 267-288.

[58] Katz, J., Sahai, A., & Waters, B. (2013). Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products. Journal of Cryptology, Vol 26, Issue 2, pp. 191-224, April 2013, Springer-Verlag, Berlin, Heidelberg, Germany.

[59] Ko, K. H., Lee, S. J., Cheon, J. H., Han, J. W., Kang, J.-S., & Park, C. (2000). New Public-Key Cryptosystem Using Braid Groups. In: Advances in Cryptology - Proceedings of CRYPTO'00, Lecture Notes in Computer Science (LNCS), Vol 1880, Springer-Verlag, pp. 166-183.

[60] Koblitz, N. (1998). Algebraic Aspects of Cryptography. Algorithms and Computation in Mathematics, Vol 3, Springer-Verlag, Berlin, Heidelberg, Germany, 1998.

[61] Lewko, A. B., Okamoto, T., Sahai, A., Takashima, K., & Waters, B. (2010). Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption. In: Advances in Cryptology - Proceedings of EUROCRYPT'10, Lecture Notes in Computer Science (LNCS), Vol 6110, Springer-Verlag, pp. 62-91.

[62] Lipmaa, H. (2003). Verifiable Homomorphic Oblivious Transfer and Private Equality Test. In: Advances in Cryptology - Proceedings of ASIACRYPT'03, Lecture Notes in Computer Science (LNCS), Vol 2894, Springer-Verlag, pp. 416-433.

[63] Ly, L. V. (2002). Polly Two - A Public-Key Cryptosystem Based on Polly Cracker. Doctoral Dissertation, Ruhr-Universitat, Bochum, Germany, October 2002.

[64] Lyubashevsky, V. & Micciancio, D. (2008). Asymptotically Efficient Lattice-Based Digital Signatures. In: Proceedings of the 5th International Conference on Theory of Cryptography (TCC'08), Lecture Notes in Computer Science (LNCS), Vol 4948, Springer-Verlag, pp. 37-54.

vances in Information and Computer Security - Proceedings of the 5th International Conference on Advances in Information and Computer Security (IWSEC'10), Lecture Notes in Computer Science (LNCS), Vol 6434, Springer-Verlag, pp. 70-83.

[76] Okamoto, T. & Uchiyama, S. (1998). A New Public-Key Cryptosystem as Secure as Factoring. In: Advances in Cryptology - Proceedings of EUROCRYPT'98, Lecture Notes in Computer Science (LNCS), Vol 1403, Springer-Verlag, pp. 308-318.

[77] Okamoto, T., Uchiyama, S., & Fujisaki, E. (2000). EPOC: Efficient Probabilistic Public-Key Encryption. Technical Report, 2000, Proposal to IEEE P1363a. Available Online at: http://grouper.ieee.org/groups/1363/StudyGroup/NewFam.html.

[78] Paeng, S.-H., Ha, K.-C., Kim, J. H., Chee, S., & Park, C. (2001). New Public Key Cryptosystem Using Finite Non Abelian Groups. In: Advances in Cryptology - Proceedings of CRYPTO'01, Lecture Notes in Computer Science (LNCS), Vol 2139, Springer-Verlag, pp. 470-485.

[79] Paillier, P. (2007). Impossibility Proofs for RSA Signatures in the Standard Model. In: Topics in Cryptology - Proceedings of the RSA Conference Cryptographers' Track (CT-RSA'07), Lecture Notes in Computer Science (LNCS), Vol 4377, pp. 31-48, San Francisco, California, USA.

[80] Paillier, P. (1999). Public-Key Cryptosystems Based on Composite Degree Residuosity Classes. In: Advances in Cryptology - Proceedings of EUROCRYPT'99, Lecture Notes in Computer Science (LNCS), Vol 1592, Springer-Verlag, pp. 223-238.

[81] Pfitzmann, B. & Waidner, M. (1997). Anonymous Fingerprinting. In: Advances in Cryptology - Proceedings of EUROCRYPT'97, Lecture Notes in Computer Science (LNCS), Vol 1233, Springer-Verlag, pp. 88-102.

[82] Peikert, C. & Rosen, A. (2007). Lattices that Admit Logarithmic Worst-Case to Average-Case Connection Factors. In: Proceedings of the 39th Annual ACM Symposium on Theory of Computing (STOC'07), pp. 478-487, ACM Press, June 2007.

[83] Peikert, C. & Rosen, A. (2006). Efficient Collision-Resistant Hashing from Worst-Case Assumptions on Cyclic Lattices. In: Theory of Cryptography - Proceedings of the 3rd International Conference on Theory of Cryptography (TCC'06), Lecture Notes in Computer Science (LNCS), Vol 3876, Springer-Verlag, pp. 145-166.

[84] Poupard, G. & Stern, J. (2000). Fair Encryption of RSA Keys. In: Advances in Cryptology - Proceedings of EUROCRYPT'00, Lecture Notes in Computer Science (LNCS), Vol 1807, Springer-Verlag, pp. 172-189.

[85] Rappe, D. (2004). Homomorphic Cryptosystems and their Applications. Doctoral Dissertation, University of Dortmund, Dortmund, Germany.

[86] Regev, O. (2005). On Lattices, Learning with Errors, Random Linear Codes, and Cryptography. In: Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC'05), pp. 84-93, ACM Press, New York, NY, USA.

[65] Lyubashevsky, V. & Micciancio, D. (2006). Generalized Compact Knapsacks are Col‐ lision Resistant. In: Proceedings of the 33rd International Conference on Automata, Languages and Programming (ICALP'06), Lecture Notes in Computer Science

[66] Lyubashevsky, V., Micciancio, D., Peikert, C., & Rosen, A. (2008). SWIFT: A Modest Proposal for FFT Hashing. In: Proceedings of the 15th International Workshop on Fast Software Encryption (FSE'08), Lecture Notes in Computer Science (LNCS), Vol 5068,

[67] Lyubashevsky, V., Peikert, C., & Regev, O. (2010). On Ideal Lattices and Learning with Errors over Rings. In: Advances in Cryptology- Proceedings of EURO‐ CRYPT'10, Lecture Notes in Computer Science (LNCS), Vol 6110, Springer-Verlag,

[68] Menezes, A., Van Orschot, P. & Vanstone, S. (1997). Handbook of Applied Cryptog‐ raphy. CRC Press, USA. Available Online at: http://www.cacr.math.uwaterloo.ca/

[69] Micciancio, D. (2007). Generalized Compact Knapsacks, Cyclic Lattices, and Efficient One-Way Functions. Computational Complexity, Vol 16, No 4, pp. 365-411, Decem‐

[70] Micciancio, D. (2001). Improving Lattice Based Cryptosystems Using Hermite Nor‐ mal Form. In: Cryptography and Lattices - Proceedings of the International Confer‐ ence on Cryptography and Lattices (CaLC'01), Lecture Notes in Computer Science

[71] Micciancio, D. & Regev, O. (2007). Worst-Case to Average-Case Reductions Based on Gaussian Measures. SIAM Journal on Computing, Vol 37, Issue 1, pp. 267-302, April

[72] Naccache, D. & Stern, J. (1998). A New Public Key Cryptosystem Based on Higher Residues. In: Proceedings of the 5th ACM Conference on Computer and Communica‐

[73] Naehrig, M., Lauter, K., & Vaikuntanathan, V. (2011). Can Homomorphic Encryption be Practical? In: Proceedings of the 3rd ACM Workshop on Cloud Computing Securi‐

[74] Nguyen, P. & Stern, J. (1999). Cryptanalysis of the Ajtai-Dwork Cryptosystem. In: Advances in Cryptology – Proceedings of CRYPTO'98, Lecture Notes in Computer

[75] Ogura, N., Yamamoto, G., Kobayashi, T., & Uchiyama, S. (2010). An Improvement of Key Generation Algorithm for Gentry's Homomorphic Encryption Scheme. In: Ad‐

Science (LNCS), Springer-Verlag, Vol 1462, New York, NY, USA, pp. 223-242.

tions Security (CCS'98), pp. 59-66, ACM Press, New York, NY, USA.

Springer-Verlag, pp. 37-54.

Springer-Verlag, pp. 54-72.

pp. 1-23.

hac/.

ber 2007.

2007.

(LNCS), Vol 4052, Springer-Verlag, pp. 144-155.

28 Theory and Practice of Cryptography and Network Security Protocols and Technologies

(LNCS), Vol 2146, Springer-Verlag, pp. 126-145.

ty, pp. 113-124, ACM Press, New York, NY, USA.


[87] Rivest, R., Adleman, L., & Dertouzos, M. (1978a). On Data Banks and Privacy Homo‐ morphisms. Foundations of Secure Communication, pp. 169-177, Academic Press.

[99] Vernam, G. S. (1926). Cipher Printing Telegraph Systems for Secret Wire and Radio Telegraphic Communications. Journal of the American Institute of Electrical Engi‐

Homomorphic Encryption — Theory and Application

http://dx.doi.org/10.5772/56687

31

[100] Wagner, D. (2003). Cryptanalysis of an Algebraic Privacy Homomorphism. In: Pro‐ ceedings of the 6th International Conference on Information Security (ISC'03), Lecture

[101] Wagner, N. R. & Magyarik, M. R. (1985). A Public Key Cryptosystem Based on the Word Problem. In: Advances in Cryptology- Proceedings of CRYPTO'84, Lecture

Notes in Computer Science (LNCS), Vol 2851, Springer-Verlag, pp.234-239.

Notes in Computer Science (LNCS), Vol 196, Springer-Verlag, pp. 19-36.

neers, Vol 45, pp. 295-301.


[99] Vernam, G. S. (1926). Cipher Printing Telegraph Systems for Secret Wire and Radio Telegraphic Communications. Journal of the American Institute of Electrical Engi‐ neers, Vol 45, pp. 295-301.

[87] Rivest, R., Adleman, L., & Dertouzos, M. (1978a). On Data Banks and Privacy Homo‐ morphisms. Foundations of Secure Communication, pp. 169-177, Academic Press. [88] Rivest, R., Shamir, A., & Adleman, L. (1978b). A Method for Obtaining Digital Signa‐ tures and Public-Key Cryptosystems. Communications of the ACM, Vol 21, No 2, pp.

[89] Sahai, A. & Waters, B. (2005). Fuzzy Identity-Based Encryption. In: Advances in Cryptology - Proceedings of EUROCRYPT'05, Lecture Notes in Computer Science

[90] Sander, T. & Tschudin, C. F. (1998). Towards Mobile Cryptography. In: Proceedings of IEEE Symposium on Security & Privacy, Oakland, California, USA, pp. 215-224,

[91] Sander, T. & Tshudin, C. F. (1998a). Protecting Mobile Agents against Malicious Hosts. In: Proceedings of International Conference on Mobile Agents and Security, Lecture Notes in Computer Science (LNCS), Vol 1419, Springer-Verlag, pp. 44-60. [92] Sander, T., Young, A., & Yung, M. (1999). Non-Interactive CryptoComputing for NC. In: Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer

[93] Shannon, C. (1949). Communication Theory of Secrecy Systems. Bell System Techni‐

[94] Smart, N. P. & Vercauteren, F. (2010). Fully Homomorphic Encryption with Relative‐ ly Small Key and Ciphertext Sizes. In: Public Key Cryptography - Proceedings of the 13th International Conference on Practice and Theory in Public Key Cryptography (PKC'10), Lecture Notes in Computer Science (LNCS), Vol 6056, Springer-Verlag, pp.

[95] Smart, N. & Vercauteren. (2012). Fully Homomorphic SIMD Operations. Design Co‐

[96] Stehle, D. & Steinfeld, R. (2010). Faster Fully Homomorphic Encryption. In: Advan‐ ces in Cryptology – Proceedings of ASIACRYPT'10, Lecture Notes in Computer Sci‐

[97] Vaikuntanathan, V. (2011). Computing Blindfolded: New Developments in Fully Ho‐ momorphic Encryption. In: Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS'11), pp. 5-16, IEEE Computer Society Press,

[98] Van Tilborg, H. C. A. & Jajodia, S. (Eds) (2011). Encyclopaedia of Cryptography and

(LNCS), Vol 3494, Springer-Verlag, pp. 457-473.

30 Theory and Practice of Cryptography and Network Security Protocols and Technologies

Science, pp. 564-566, October 1999.

cal Journal, Vol 28, Issue 4, pp. 656-715, October 1949.

des and Cryptography, Springer, USA, July 2012.

ence (LNCS), Vol 6477, Springer-Verlag, pp. 377-394.

Security. Springer-Verlag, New York, NY, USA, 2011.

120-126.

May 1998.

420-443.

Washington, DC, USA.


**Chapter 2**

## **Optical Communication with Weak Coherent Light Fields**

Kim Fook Lee, Yong Meng Sua and Harith B. Ahmad

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56375

#### **1. Introduction**

Entanglement and superposition are the foundations of the emerging field of quantum communication and information processing. These two fundamental features of quantum mechanics have made quantum key distribution unconditionally secure (Scarani et al., 2009; Weedbrook et al., 2010) compared with communication based on classical key distribution. Current implementations of optical quantum communication are mainly based on discrete and continuous quantum variables, which are usually generated through nonlinear interaction processes in χ(2) (Kwiat et al., 1995) and χ(3) (Lee et al., 2006, 2009) media. Discrete-variable qubit implementations using polarization (Liang et al., 2006, 2007; Chen et al., 2007, 2008; Sharping et al., 2006) and time-bin (Brendel et al., 1999; Tittel et al., 1998, 1999) entanglement have difficulty achieving unconditional security, and usually have a low optical data rate because of the post-selection technique, with its low probability of success, in low-efficiency single-photon detectors at the telecom band (Liang et al., 2005, 2006, 2007). Continuous-variable implementations using quadrature entanglement (Yonezawa et al., 2004; Bowen et al., 2003; Silberhorn et al., 2002) and polarization squeezing (Korolkova et al., 2002) can achieve high efficiency and a high optical data rate because high-speed, efficient homodyne detection is available. However, the quality of quadrature entanglement is very sensitive to loss, which makes it ill-suited to implementing entanglement-based quantum protocols over long distances. Continuous-variable protocols that do not rely on entanglement, for instance coherent-state based quantum communication (Yuen, 2004; Corndorf et al., 2003; Barbosa et al., 2003; Grosshans et al., 2002, 2003; Qi et al., 2007; Wilde Qi et al., 2008), are well suited to long-distance optical communication. Several experimental approaches have been taken to resolve transmission loss in long-distance optical communication using a coherent light source. Optical wave-mechanical implementations (Lee et al., 2002, 2004) of entanglement and superposition with coherent fields have been demonstrated.

© 2013 Lee et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this chapter, we discuss and demonstrate in detail a new type of optical communication based on weak coherent light fields.

#### **2. Correlation functions of two weak light fields**

Two orthogonal light fields are used to implement a correlation function between two distant observers. In Stapp's approach (Grib et al., 1999; Peres, 1995) for two distant observers *A* and *B*, when analyzer *A* is oriented along the polarization angle *θ1*, the transmitted |*θ*1⟩// and reflected |*θ*1⟩⊥ polarization vectors of the light are given by,

$$\left| \theta_1 \right\rangle_{//} = \cos \theta_1 \left| H_1 \right\rangle + \sin \theta_1 \left| V_1 \right\rangle, \tag{1}$$


$$\left| \theta_1 \right\rangle_{\perp} = -\sin \theta_1 \left| H_1 \right\rangle + \cos \theta_1 \left| V_1 \right\rangle, \tag{2}$$

where *H* and *V* denote the horizontal and vertical axes. Analyzer *A* is a combination of a half-wave plate (HWP) and a polarization beam splitter (PBS) for projecting the linear polarization of the incoming photon. The operator associated with analyzer *A* can be represented by

$$\hat{A}_1 = \left| \theta_1 \right\rangle_{//} \, {}_{//}\!\left\langle \theta_1 \right| - \left| \theta_1 \right\rangle_{\perp} \, {}_{\perp}\!\left\langle \theta_1 \right|, \tag{3}$$

$$\hat{A}_1 = \cos 2\theta_1 \left( \left| H_1 \right\rangle \left\langle H_1 \right| - \left| V_1 \right\rangle \left\langle V_1 \right| \right) + \sin 2\theta_1 \left( \left| H_1 \right\rangle \left\langle V_1 \right| + \left| V_1 \right\rangle \left\langle H_1 \right| \right). \tag{4}$$

The operator *A1* has eigenvalues of ±1, such that,

$$\hat{A}_1 \left| \theta_1 \right\rangle_{//} = +1 \left| \theta_1 \right\rangle_{//}, \tag{5}$$

$$\hat{A}_1 \left| \theta_1 \right\rangle_{\perp} = -1 \left| \theta_1 \right\rangle_{\perp}, \tag{6}$$

depending on whether the photon is transmitted or reflected by the analyzer. Similarly, analyzer *B* oriented at *θ2* can be defined as operator *B2*,

$$\hat{B}_2 = \cos 2\theta_2 \left( \left| H_2 \right\rangle \left\langle H_2 \right| - \left| V_2 \right\rangle \left\langle V_2 \right| \right) + \sin 2\theta_2 \left( \left| H_2 \right\rangle \left\langle V_2 \right| + \left| V_2 \right\rangle \left\langle H_2 \right| \right). \tag{7}$$

Operator *A1* (*B2*) with eigenvalues of ±1 can be measured using the balanced detection scheme shown in Fig. 1. Two detectors are placed at the two output ports of a cube polarization beam splitter, and their output currents are subtracted from each other. This detection scheme can be used for measuring operator *A1* of Eq.(4) and *B2* of Eq.(7), that is, the subtraction of the projection onto the reflected signal D⊥ from the projection onto the transmitted signal D//.
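As a quick numerical sanity check (our own sketch, not part of the chapter's experiment), the analyzer operator of Eq.(4) can be written as a 2×2 matrix in the {|H⟩, |V⟩} basis and the eigenrelations of Eqs.(5)–(6) verified directly:

```python
import numpy as np

def analyzer_operator(theta):
    """A-hat of Eq.(4): cos2t*(|H><H| - |V><V|) + sin2t*(|H><V| + |V><H|)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    # Basis ordering: |H> = (1, 0), |V> = (0, 1)
    return np.array([[c, s],
                     [s, -c]])

theta = 0.3  # arbitrary analyzer angle in radians
A = analyzer_operator(theta)

# Transmitted and reflected states of Eqs.(1)-(2)
ket_par = np.array([np.cos(theta), np.sin(theta)])    # |theta>_//
ket_perp = np.array([-np.sin(theta), np.cos(theta)])  # |theta>_perp

# Eqs.(5)-(6): eigenvalue +1 for |theta>_//, -1 for |theta>_perp
assert np.allclose(A @ ket_par, +1 * ket_par)
assert np.allclose(A @ ket_perp, -1 * ket_perp)
print(np.linalg.eigvalsh(A))  # -> [-1.  1.]
```

The same construction with θ2 reproduces the operator *B2* of Eq.(7).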

Let us consider a beam of photons incident on the PBS. If a photon goes through the PBS, it produces a non-zero signal at detector D// and zero signal at detector D⊥; the subtraction then yields a positive signal, D// − D⊥ ≥ 0. If a photon is reflected from the PBS, it goes to detector D⊥ and produces a non-zero signal at detector D⊥ and zero signal at detector D//; the subtraction then yields a negative signal, D// − D⊥ ≤ 0. Over a certain amount of time, the subtraction records random positive and negative spikes corresponding to the eigenvalues of +1 and −1 of operator *A1*, respectively, as shown in the inset of Fig. 1.

**Figure 1.** Detection scheme based on balanced homodyne detection for measuring operators *A1* and *B2*.

If the incoming photons are in a superposition of |*θ*1⟩// and |*θ*1⟩⊥, the detection scheme *A* records a series of discrete random values, +1 and −1. The mean value of *A1* is then zero, that is, ⟨*A*1⟩ = 0. Similarly, we can apply the same detection scheme to measure operator *B2* and obtain ⟨*B*2⟩ = 0. The expectation value of the product, ⟨*A*1*B*2⟩, or the mean value of the product of the signals of *A1* and *B2*, will produce correlation functions, as given by,

$$C(\theta_1, \theta_2) \propto \left\langle A_1 B_2 \right\rangle \propto \pm \cos 2(\theta_1 \pm \theta_2). \tag{8}$$

As shown in Eq.(8), there are four types of correlation functions, analogous to the four Bell states. Theoretical predictions for the mean value measurements of ⟨*A*1*B*2⟩ are shown in Fig. 2.
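The shape of Eq.(8) can be illustrated with a toy classical-field model (an assumption made here for illustration only, not the chapter's apparatus): if each trial shares a random polarization angle φ and each balanced detector's averaged output is modeled as cos 2(θ − φ), the individual means vanish while the product of the two outputs retains the cos 2(θ1 − θ2) correlation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
phi = rng.uniform(0.0, np.pi, n)  # shared random polarization angle per trial

def mean_outputs(theta1, theta2):
    """Toy model: each analyzer's balanced output ~ cos 2(theta - phi)."""
    a = np.cos(2 * (theta1 - phi))
    b = np.cos(2 * (theta2 - phi))
    return a.mean(), b.mean(), (a * b).mean()

t1, t2 = 0.2, 0.9
mA, mB, mAB = mean_outputs(t1, t2)
print(round(mA, 3), round(mB, 3))  # both near 0, like <A1> = <B2> = 0
# the product average follows 0.5 * cos 2(t1 - t2)
print(round(mAB, 3), round(0.5 * np.cos(2 * (t1 - t2)), 3))
```

Averaging over the shared angle gives ⟨ab⟩ = ½ cos 2(θ1 − θ2), reproducing the ±cos 2(θ1 ± θ2) dependence up to a scale factor.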


**Figure 2.** Theoretical prediction of the correlation functions (a) −cos 2(θ*1* − θ*2*), (b) −cos 2(θ*1* + θ*2*), (c) cos 2(θ*1* − θ*2*), (d) cos 2(θ*1* + θ*2*).

#### **3. Balanced homodyne detector**

A balanced homodyne detector is utilised as the detection scheme for the weak coherent light fields used for optical communication.

It consists of a 50/50 beam splitter, two photodetectors, a local oscillator field and a transimpedance amplifier. The superposed local oscillator field and weak light field are detected by photodiodes D1 and D2, leading to the generation of photocurrents *I1* and *I2*. The photodiodes are connected together in such a way that the output equals *I1* minus *I2*, as shown in Fig. 3.

**Figure 3.** Balanced Homodyne detection.

The balanced detector has two input ports. The signal field and the local oscillator field are optically mixed at the beam splitter. The local oscillator field is a large-amplitude lightwave with the same frequency as the signal and a well-defined phase with respect to the signal field. Generally, the local oscillator field can be obtained from the same laser source as the signal field. The emerging output fields *ε*1 and *ε*2 are superpositions of the signal and local oscillator fields. The output fields *ε*1 and *ε*2 are given as,

$$\begin{aligned} \varepsilon\_1 &= \frac{1}{\sqrt{2}} (\varepsilon\_{LO} + \varepsilon\_s), \quad &\text{(a)}\\ \varepsilon\_2 &= \frac{1}{\sqrt{2}} (\varepsilon\_{LO} - \varepsilon\_s). \quad &\text{(b)} \end{aligned} \tag{9}$$

where *εLO* and *εs* are the amplitudes of the local oscillator and signal fields, respectively. The photocurrents produced by the output fields *ε*1 and *ε*2 are given as

$$I\_1 = \left| \varepsilon\_1 \right|^2 = \varepsilon\_1 \varepsilon\_1^\* \tag{10}$$

$$I\_2 = \left| \varepsilon\_2 \right|^2 = \varepsilon\_2 \varepsilon\_2^\*. \tag{11}$$

Hence, the output of the balanced homodyne detector will be given as,


$$I\_1 - I\_2 = 2\varepsilon\_s \varepsilon\_{LO}.\tag{12}$$

The signal and local oscillator fields are derived from the same laser source with a relative phase *φ*. Considering only the real parts of the signal and local oscillator fields, they can be described as,

$$\varepsilon_s = A_{\varepsilon_s} \cos(\omega t), \tag{13}$$

$$\varepsilon_{LO} = A_{\varepsilon_{LO}} \cos(\omega t + \varphi). \tag{14}$$

where *Aεs* and *AεLO* are the amplitudes of the signal and local oscillator fields, *ω* is the optical frequency, and *φ* is the relative phase between the fields. Hence the output of the balanced homodyne detector is given by,

$$I_1 - I_2 = A_{\varepsilon_s} A_{\varepsilon_{LO}} \left\{ \cos(\varphi) + \cos(2\omega t + \varphi) \right\}. \tag{15}$$

The second term in Eq.(15) is a fast-varying term beyond the detection bandwidth of the photodetector. Therefore, the output of the balanced homodyne detector is phase dependent, and is given by,

$$I_1 - I_2 \propto A_{\varepsilon_s} A_{\varepsilon_{LO}} \cos(\varphi). \tag{16}$$
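Eqs.(9)–(16) can be checked with a simple time-domain sketch (the parameter values below are illustrative assumptions): squaring the mixed fields and averaging over many optical cycles removes the 2ωt term, leaving the cos φ dependence of Eq.(16):

```python
import numpy as np

w = 2 * np.pi * 1.0      # optical frequency (arbitrary units)
A_s, A_lo = 0.1, 10.0    # weak signal, strong local oscillator (assumed)
t = np.linspace(0.0, 100.0, 200_001)  # many optical cycles

def balanced_output(phi):
    e_s = A_s * np.cos(w * t)             # Eq.(13)
    e_lo = A_lo * np.cos(w * t + phi)     # Eq.(14)
    e1 = (e_lo + e_s) / np.sqrt(2)        # Eq.(9a)
    e2 = (e_lo - e_s) / np.sqrt(2)        # Eq.(9b)
    i1, i2 = e1**2, e2**2                 # photocurrents, Eqs.(10)-(11)
    # a slow detector averages out the cos(2wt + phi) term of Eq.(15)
    return (i1 - i2).mean()

for phi in (0.0, np.pi / 3, np.pi / 2):
    print(round(balanced_output(phi), 3), round(A_s * A_lo * np.cos(phi), 3))
```

Each printed pair agrees, confirming that the averaged difference current tracks *Aεs AεLO* cos(φ).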


**Figure 5.** Experimental setup for demonstration of the optical communication with weak coherent light fields.

For simplicity, we use unit-vector notation and drop the field-amplitude notation. Now, analyzer *A* in beam 1 will experience a homogeneous superposition of left and right circularly polarized weak light fields, and similarly for analyzer *B* in beam 2. Analyzer *A*(*B*) is placed before the balanced homodyne detector *A*(*B*) to project out the phase angle *θ1*(*θ2*) as,

$$\hat{e}_1 \rightarrow \cos \theta_1 \hat{H}_1 + \sin \theta_1 \hat{V}_1, \tag{18}$$

$$\hat{e}_2 \rightarrow \cos \theta_2 \hat{H}_2 + \sin \theta_2 \hat{V}_2. \tag{19}$$

One of the main features of the balanced homodyne detector is its high signal-to-noise ratio compared with a single detector. For example, classical intensity fluctuations of the laser would affect the measurement of a single detector. In contrast, any change in intensity will be canceled by the subtraction of the photocurrents in an ideal balanced homodyne detector.

However, due to the Poissonian statistics of the coherent light and the random splitting process at the 50/50 beam splitter, fluctuations in intensity cannot be completely removed. Therefore, even in the presence of only the local oscillator field, the balanced homodyne detector will have a shot noise level above the electronics noise level, as depicted in Fig. 4, limiting the signal-to-noise ratio.

**Figure 4.** Frequency spectrum of balanced homodyne detector. The red line is the electronics noise of the BHD with‐ out any light while the blue line is the shot noise level of the BHD with the presence of the local oscillator field.
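The residual shot-noise floor can be illustrated with a small Monte Carlo sketch (the photon numbers here are arbitrary assumptions): even with only a Poissonian local oscillator at the input, random 50/50 splitting leaves a difference signal whose variance equals the mean photon number, i.e. the shot-noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 100_000
mean_photons = 1_000_000  # assumed LO photon number per measurement window

n = rng.poisson(mean_photons, shots)  # Poissonian local-oscillator pulses
n1 = rng.binomial(n, 0.5)             # random 50/50 splitting at the beam splitter
n2 = n - n1
diff = n1 - n2                        # balanced-detector difference signal

# The common-mode intensity cancels on average, but the splitting randomness
# leaves a residual variance at the shot-noise level, Var(diff) ~ <n>.
print(round(diff.var() / mean_photons, 2))
```

The printed ratio is close to 1, showing that the subtraction removes common-mode intensity noise but cannot go below the shot-noise limit.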

#### **4. Practical demonstration of the optical communication with two weak light fields**

A proof-of-principle experiment to demonstrate the correlations of two weak light fields as described in Section 2 is shown in Fig. 5. A continuous-wave laser at a telecom-band wavelength (1534 nm) is used to provide two orthogonal weak light fields. We use a 50/50 beam splitter to optically mix the vertically and horizontally polarized coherent light fields. Beam 1 from output port 1 of the beam splitter is a superposition of the vertically and horizontally polarized weak light fields, and similarly for beam 2 from output port 2 of the beam splitter. The balanced homodyne detectors are made of two p-i-n photodiodes (EXT500), and the signal measured by the balanced homodyne detectors is further amplified by a transimpedance amplifier. A quarter-wave plate at 45°, as part of the measuring device, is inserted in beams 1 and 2 to transform the linearly polarized states to circularly polarized states. By using the quarter-wave plate transformation matrix, the field amplitudes *V1, H1, V2* and *H2* are transformed as,

$$\begin{aligned} \hat{V}\_1 &\rightarrow -i\hat{H}\_1 + \hat{V}\_1, & \text{(a)}\\ \hat{H}\_1 &\rightarrow \hat{H}\_1 - i\hat{V}\_1, & \text{(b)}\\ \hat{V}\_2 &\rightarrow -i\hat{H}\_2 + \hat{V}\_2, & \text{(c)}\\ \hat{H}\_2 &\rightarrow \hat{H}\_2 - i\hat{V}\_2, & \text{(d)} \end{aligned} \tag{17}$$

where the phase shift due to the beam splitter is included.
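The transformation in Eq.(17) is the action of a quarter-wave plate with its fast axis at 45°. As a quick sketch using one common Jones-matrix convention (the 1/√2 normalization and global phase are dropped in Eq.(17)):

```python
import numpy as np

# Jones matrix of a quarter-wave plate, fast axis at 45 degrees
# (one common convention, defined up to a global phase)
QWP_45 = (1 / np.sqrt(2)) * np.array([[1, -1j],
                                      [-1j, 1]])

H = np.array([1, 0], dtype=complex)  # horizontally polarized field
V = np.array([0, 1], dtype=complex)  # vertically polarized field

out_H = QWP_45 @ H  # proportional to H - iV, cf. Eq.(17b)
out_V = QWP_45 @ V  # proportional to -iH + V, cf. Eq.(17a)
```

Both outputs have equal-magnitude H and V components, i.e., circular polarization.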

The second term in Eq.(15) is the fast varying term beyond the detection bandwidth of the photodetector. Therefore, the output of the balanced homodyne detector is phase dependent and is given by,

$$I\_1 - I\_2 \propto A\_sA\_{LO}\cos(\varphi). \tag{16}$$
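The beat of Eq.(16) follows from mixing a weak signal field with a strong local oscillator on the 50/50 beam splitter; a minimal numerical sketch (with assumed, illustrative amplitudes):

```python
import numpy as np

A_s, A_lo = 0.1, 10.0               # assumed signal and local oscillator amplitudes
phi = np.linspace(0, 2 * np.pi, 256)

# Intensities at the two 50/50 beam splitter outputs
# (fast optical-frequency terms already dropped)
i1 = 0.5 * (A_s**2 + A_lo**2) + A_s * A_lo * np.cos(phi)
i2 = 0.5 * (A_s**2 + A_lo**2) - A_s * A_lo * np.cos(phi)

# Photocurrent subtraction leaves only the phase-dependent beat, cf. Eq.(16)
difference = i1 - i2                # = 2 * A_s * A_lo * cos(phi)
```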

38 Theory and Practice of Cryptography and Network Security Protocols and Technologies


**Figure 5.** Experimental setup for demonstration of the optical communication with weak coherent light fields.

For simplicity we use unit vector notation and drop the amplitude of field notation. Now, analyzer *A* in beam 1 will experience a homogeneous superposition of left circularly polarized and right circularly polarized weak light fields, and similarly for analyzer *B* in beam 2. Analyzer *A*(*B*) is placed before balanced homodyne detector *A*(*B*) to project out the projection angle *θ*1(*θ*2) as,

$$
\hat{e}\_1 \rightarrow \cos \theta\_1 \hat{H}\_1 + \sin \theta\_1 \hat{V}\_1, \tag{18}
$$

$$
\hat{e}\_2 \to \cos \theta\_2 \hat{H}\_2 + \sin \theta\_2 \hat{V}\_2. \tag{19}
$$

The superposed field in beam 1 after the λ/4 wave plate and the analyzer can be expressed as,

$$\begin{aligned} E\_1(t) &= \left[ (\hat{H}\_1 - i\hat{V}\_1)e^{-i(\omega t+\varphi)} + (i\hat{H}\_1 - \hat{V}\_1)e^{-i\omega t} \right] \cdot \hat{e}\_1 \\ &= (-\cos\theta\_1 + i\sin\theta\_1)e^{-i(\omega t+\varphi)} + (-i\cos\theta\_1 + \sin\theta\_1)e^{-i\omega t}, \end{aligned} \tag{20}$$

and the balanced detector *B* measures

j

q

rewritten as,

multiplication signal,

2 2// 2 2

11 1 *A* ( ) 2{cos(2 )sin( ) sin(2 )cos( )},

*A Cos H H V V Sin H V V H* 1 1 1 1 11 1 11 1 1 = -- + 2 2.

The factor of 2 in Eq.(26) is due to the 3 dB gain obtained by balanced detection scheme. Note that the unit polarization projectors (| *H*<sup>1</sup> *H*<sup>1</sup> | − |*V*<sup>1</sup> *V*<sup>1</sup> |) and (| *H*<sup>1</sup> *V*<sup>1</sup> | + |*V*<sup>1</sup> *H*<sup>1</sup> |) in Eq.(27) can be interpreted by in-phase and out-of-phase components of the light field. Similarly

The interference signals in detectors *A* and *B* are then multiplied to obtain the anti-correlated

12 12 sin(2 )sin(2 ) cos(2( )) cos(2( )).

Then, the mean value of this multiplied signal is measured. We obtain one of the correlation

where the second term in Eq.(26) is averaging to zero due to the slow varying relative phase *φ* of the two orthogonal weak light fields from 0 to 2π. We normalized the correlation function *C*(*θ*1, *θ*2) with its maximum obtainable value that is, *θ*<sup>1</sup> =*θ*2. Thus, for the setting of the

 qj

> q q

qqj

µ- - - + + (28)

(29)

qj

1 2 12 1 2 *AB C* ´ µ µ- - ( , ) cos2( ), qq

´ µ- + +

12 1 2

qq

 q

 q( ) ( ) ) (27)

 j= + (26)

 qj

which is identical in structure with operator *A1* as in Eq.(4), that is

for the interference signals obtained in balanced detector B.

*A B*

functions *C*(*θ*1, *θ*2) as described in section 2,

2sin(2 ). *B DD* j

q j

^ = -

The interference signals of Eq.(23) and Eq.(25) above for balanced detectors *A* and *B* are the measurements of operators *A1* and *B2*, respectively. The interference signal in detector *A* is anti-correlated to detector *B* because of the phase shift of the beam splitter. The interference signals contain information of the projection angles of the analyzers. The average of the interference signals is zero, that is, <*A1*> = 0 and <*B2*> = 0. To further discuss the significant of measuring the operator *A1*, the interference signals obtained in balanced detector *A* can be

=- + (25)

Optical Communication with Weak Coherent Light Fields

http://dx.doi.org/10.5772/56375

41

( )

and similarly for the superposed field in beam 2,

$$\begin{aligned} E\_2(t) &= \left[ (\hat{H}\_2 - i\hat{V}\_2)e^{-i(\omega t+\varphi)} + (i\hat{H}\_2 - \hat{V}\_2)e^{-i\omega t} \right] \cdot \hat{e}\_2 \\ &= (-\cos\theta\_2 + i\sin\theta\_2)e^{-i(\omega t+\varphi)} + (-i\cos\theta\_2 + \sin\theta\_2)e^{-i\omega t}, \end{aligned} \tag{21}$$

where ω is the optical frequency and *φ* is the relative phase of the two orthogonal weak light fields. Thus, the interference signals obtained by the photodetector *D*1// in the balanced homodyne detector at beam 1 are given as,

$$\begin{aligned} D\_{1//}(\varphi) &= -ie^{-i(2\theta\_1 + \varphi)} + c.c \\ &\propto \sin(2\theta\_1 + \varphi), && \text{(a)}\\ D\_{1\perp}(\varphi) &= ie^{-i(2\theta\_1 + \pi + \varphi)} + c.c \\ &\propto -\sin(2\theta\_1 + \varphi). && \text{(b)} \end{aligned} \tag{22}$$

On the other hand, for photodetector *D*2//, the reflected beat signal takes the form of Eq.(22b).

Then, the balanced detector *A* measures

$$\begin{aligned} A\_1(\varphi) &= D\_{1//} - D\_{1\perp} \\ &= 2\sin(2\theta\_1 + \varphi). \end{aligned} \tag{23}$$

Similarly, the interference signals obtained by the photodetectors in balanced homodyne detector at beam 2 can be written as,

$$\begin{aligned} D\_{2//}(\varphi) &= ie^{-i(2\theta\_2 + \pi + \varphi)} + c.c \\ &\propto -\sin(2\theta\_2 + \varphi), && \text{(a)}\\ D\_{2\perp}(\varphi) &= -ie^{-i(2\theta\_2 + \pi + \varphi)} + c.c \\ &\propto \sin(2\theta\_2 + \varphi), && \text{(b)} \end{aligned} \tag{24}$$

and the balanced detector *B* measures


$$\begin{aligned} B\_2(\varphi) &= D\_{2//} - D\_{2\perp} \\ &= -2\sin(2\theta\_2 + \varphi). \end{aligned} \tag{25}$$

The interference signals of Eq.(23) and Eq.(25) above for balanced detectors *A* and *B* are the measurements of operators *A1* and *B2*, respectively. The interference signal in detector *A* is anti-correlated to detector *B* because of the phase shift of the beam splitter. The interference signals contain information of the projection angles of the analyzers. The average of the interference signals is zero, that is, <*A1*> = 0 and <*B2*> = 0. To further discuss the significance of measuring the operator *A1*, the interference signals obtained in balanced detector *A* can be rewritten as,

$$A\_1(\varphi) = 2\left[\cos(2\theta\_1)\sin(\varphi) + \sin(2\theta\_1)\cos(\varphi)\right], \tag{26}$$

which is identical in structure to operator *A1* in Eq.(4), that is

$$\hat{A}\_1 = \cos2\theta\_1\left(\left|H\_1\right\rangle\left\langle H\_1\right| - \left|V\_1\right\rangle\left\langle V\_1\right|\right) - \sin2\theta\_1\left(\left|H\_1\right\rangle\left\langle V\_1\right| + \left|V\_1\right\rangle\left\langle H\_1\right|\right). \tag{27}$$

The factor of 2 in Eq.(26) is due to the 3 dB gain obtained by the balanced detection scheme. Note that the unit polarization projectors (|*H*1⟩⟨*H*1| − |*V*1⟩⟨*V*1|) and (|*H*1⟩⟨*V*1| + |*V*1⟩⟨*H*1|) in Eq.(27) can be interpreted as the in-phase and out-of-phase components of the light field. The same holds for the interference signals obtained in balanced detector *B*.
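In the {|H1⟩, |V1⟩} basis the two projector combinations in Eq.(27) are the Pauli matrices σz and σx, so the operator can be written down directly (a minimal numerical sketch for checking its structure, not part of the experiment):

```python
import numpy as np

def A_hat(theta):
    """Operator of Eq.(27) in the {|H>, |V>} basis."""
    sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])  # |H><H| - |V><V|
    sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])   # |H><V| + |V><H|
    return np.cos(2 * theta) * sigma_z - np.sin(2 * theta) * sigma_x
```

For any analyzer angle θ the operator has eigenvalues ±1, consistent with a projective polarization measurement.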

The interference signals in detectors *A* and *B* are then multiplied to obtain the anti-correlated multiplication signal,

$$\begin{aligned} A\_1 \times B\_2 &\propto -\sin(2\theta\_1 + \varphi)\sin(2\theta\_2 + \varphi) \\ &\propto -\cos(2(\theta\_1 - \theta\_2)) + \cos(2(\theta\_1 + \theta\_2 + \varphi)). \end{aligned} \tag{28}$$

Then, the mean value of this multiplied signal is measured. We obtain one of the correlation functions *C*(*θ*1, *θ*2) as described in section 2,

$$\overline{A\_1 \times B\_2} \propto \mathbb{C}(\theta\_1, \theta\_2) \propto -\cos 2(\theta\_1 - \theta\_2),\tag{29}$$

where the second term in Eq.(28) averages to zero due to the slowly varying relative phase *φ* of the two orthogonal weak light fields from 0 to 2π. We normalize the correlation function *C*(*θ*1, *θ*2) with its maximum obtainable value, that is, at *θ*1 =*θ*2. Thus, for the setting of the analyzers at *θ*1 =*θ*2, the normalized correlation function *C*(*θ*1, *θ*2)= −1 shows that the two beams are anti-correlated. To generate other correlation functions, such as *C*(*θ*1, *θ*2)∝ −cos2(*θ*1 + *θ*2), the λ/4 wave plate at beam 2 is rotated to -45°; then the beat signal measured by balanced homodyne detector *B2* of Eq.(25) is given by

$$B\_2(\varphi) \propto D\_{2//}(\varphi) - D\_{2\perp}(\varphi) \propto -2\sin(2\theta\_2 - \varphi). \tag{30}$$

Hence, we obtain the correlation function,

$$\mathbb{C}(\theta\_1, \theta\_2) \propto -\cos 2(\theta\_1 + \theta\_2). \tag{31}$$
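The phase averaging that takes Eq.(28) to Eq.(29) can be checked numerically; a sketch using the beat signals of Eq.(23) and Eq.(25), with the relative phase swept uniformly over a full period:

```python
import numpy as np

phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)  # relative phase sweep

def mean_product(theta1, theta2):
    A1 = 2 * np.sin(2 * theta1 + phi)    # Eq.(23)
    B2 = -2 * np.sin(2 * theta2 + phi)   # Eq.(25)
    return np.mean(A1 * B2)              # phase-dependent term averages out

def C(theta1, theta2):
    # normalize by the magnitude of the maximum, reached at theta1 == theta2
    return mean_product(theta1, theta2) / abs(mean_product(0.0, 0.0))

# C(theta1, theta2) reproduces -cos 2(theta1 - theta2), cf. Eq.(29)
```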


As for the correlation function *C*(*θ*1, *θ*2)∝cos2(*θ*1 −*θ*2), a λ/2 plate is inserted in beam 2; then the minus sign of the beat signal *B2* of Eq.(30) is changed to a positive sign, yielding the desired correlation function. Similarly, with the λ/2 wave plate in beam 2 and the λ/4 wave plate at beam 2 rotated to -45°, the beat signal *B2* of Eq.(30) is equal to 2sin(2*θ*2 −*φ*), thus providing the last correlation function *C*(*θ*1, *θ*2)∝cos2(*θ*1 + *θ*2).

#### **4.1. Correlation measurement of a stable field and a noise field**

To verify the above analysis and measurement method for weak light fields, we present an experimental measurement of one stable coherent light field and one random-noise phase modulated light field.

One stable coherent field is mixed with one noise field in a beam splitter. The experimental result has been recently published (Lee, 2009). Fig. 6(a) and (b) are the beat signals obtained at *A* and *B*, where the phase *ϕc(t)* is modulated with random noise through an acousto-optic modulator. The product of the beat signals at *A* and *B* is shown in Fig. 6(c). The mean-value measurement produces the bipartite correlation *–cos2(θ1–θ2)*, which is still a classical correlation. However, it is obvious that the information of *θ1* and *θ2* is protected by classical noise, not quantum noise. Classical noise is not completely random compared to the quantum noise inherent in a coherent state.

In the next section, two weak coherent light fields |*α*⟩ and |*β*⟩ are used for generating quantum correlation, where the quantum noise *ϕ(t) = ϕβ - ϕα* is provided by the mean photon number fluctuation.

#### **4.2. Correlation measurement of two weak light fields**

By using the experimental setup as proposed in Fig.5, we are able to generate four types of bipartite correlation, given as

$$\mathbb{C}(\theta\_1, \theta\_2) \propto \pm \cos 2(\theta\_1 \pm \theta\_2). \tag{32}$$


**Figure 6.** (a) The beat signal at balanced detector *A*, (b) the beat signal at balanced detector *B*, (c) the multiplied beat signal.

To verify the analysis discussed in section 2, we perform systematic studies of the proposed experiment. We use a piezoelectric transducer (PZT) to modulate the phase of a weak light field. Then, all four types of correlation functions were obtained by manipulating the experimental setup as discussed in the previous section. We normalized the correlation function *-cos 2(θ1 - θ2)* with its maximum obtainable value, that is, at *θ1 = θ2*. Fig.7 shows the normalized correlation functions *±cos 2(θ1 ± θ2)* as a function of the relative projection angle of the analyzers *A* and *B*. The blue line is the predicted theoretical value, while the red circles with error bars are the experimental data.

For each data point, we take ten measurements of the multiplied signal and obtain the average mean value. Each measurement was obtained by fixing the projection angle of analyzer A and rotating the projection angle of analyzer B. The error bars are mainly due to the electronic noise and the temperature dependence of the polarization optics.

#### **4.3. Bit generation and measurement**


After we established one of the bipartite correlation functions between observers A and B, bit generation and measurement for optical communications can be done by implementing bit correlations between them.

A lock-in amplifier is used to measure the bit correlation between observers *A* and *B*. Fig.8 depicts the experimental setup for bit measurement for observers *A* and *B*. To perform this measurement for the established correlation function *–cos 2(θ1 − θ2)*, we ramp the piezoelectric transducer (PZT) at one of the weak light fields to obtain one period of the interference signal. An example of a single period of the interference signal measured at the observer and the reference signal for the lock-in amplifier is shown in Fig.9. For practical optical communication, phase locking of the two orthogonal weak light fields is required.

**Figure 7.** Experimental measurement of bipartite correlation functions (a) *–cos 2(θ1 − θ2)*, (b) *−cos 2(θ1 + θ2)*, (c) *cos 2(θ1 − θ2)*, (d) *cos 2(θ1 + θ2)*.

**Figure 8.** Experimental setup for demonstration of the bit generation and measurement

We measure the quadrature phases of the orthogonal weak light fields with a step size of n*π*/2 (n = integer) as shown in Fig. 10(a) (blue line). Using the same lock-in reference phase in the lock-in amplifier, we measure the quadrature phases of the weak coherent state at detector *B* as shown in Fig. 10(a) (dashed red line). We have observed the bit correlation between the two parties for the shared correlation function *−cos 2(θ1 − θ2)* as shown in Fig. 10(a), where the positive (negative) quadrature signal is encoded as key/bit '1' ('0'), respectively. By using the same lock-in reference phase, we observe bit correlations for the other three types of correlation functions *−cos 2(θ1 + θ2)*, *cos 2(θ1 + θ2)*, and *cos 2(θ1 − θ2)* as shown in Figs. 10(b), 10(c), and 10(d), respectively.
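The sign-based encoding can be sketched as follows (a toy illustration with ideal ±1 quadrature signs, not our measured data): with the shared correlation *−cos 2(θ1 − θ2)* at *θ1 = θ2*, observer B's quadrature is anti-correlated with A's, so B recovers A's bit by inverting the sign.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observer A encodes each bit in the sign of its measured quadrature:
# positive quadrature -> '1', negative quadrature -> '0'
bits_sent = rng.integers(0, 2, 16)
quad_A = np.where(bits_sent == 1, 1.0, -1.0)

# Shared correlation -cos 2(theta1 - theta2) at theta1 == theta2:
# B's quadrature is perfectly anti-correlated with A's (idealized)
quad_B = -quad_A

# B decodes by inverting the known anti-correlation
bits_received = (quad_B < 0).astype(int)
```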


**Figure 9.** (a) Single period of the interference signal measured at observer *A* (red line) compared to (b) the piezoelectric driving voltage (blue dashed line), which is used as the reference phase in the lock-in amplifier.

**Figure 10.** Bit correlation of two weak light fields (a) *−cos 2(θ1 − θ2)*, (b) *−cos 2(θ1 + θ2)*, (c) *cos 2(θ1 + θ2)*, and (d) *cos 2(θ1 − θ2)*.


In practical long-distance optical communication, we can establish one of the bit correlations for calibrating the lock-in reference phase at observers *A* and *B*. We further explored the feasibility of the scheme for long-distance optical communication by performing bit correlations between two observers over a distance of 10 km through a transmission fiber. We couple one of the orthogonal weak light fields into 10 km of transmission fiber, and a quarter-wave plate and a half-wave plate are used at the output of the transmission fiber to compensate the birefringence. The correlation between the two observers *A* and *B* is found to be preserved over the 10 km transmission fiber (Sua et al., 2011). We managed to establish four types of correlation functions and performed bit correlations for each shared correlation function between the two observers.

edges the support from University of Malaya High Impact Research Grant UM.C/HIR/MOHE/

Optical Communication with Weak Coherent Light Fields

http://dx.doi.org/10.5772/56375

47

and Harith B. Ahmad2

1 Department of Physics, Michigan Technological University, Houghton, Michigan, USA

[1] Barbosa, G. A.; Corndorf, E.; Kumar, P. & Yuen, H. P. (2003). Secure Communication

[2] Barry, J. R. & Kahn, J. M. (1992). Carrier synchronization for homodyne and heterodynedetection of optical quadriphase-shift keying, J. Lightwave Technol., Vol.10, pp1939–

[3] Bhattacharya, N.; van Linden van den Heuvell, H. B. & Spreeuw, R. J. C. (2002). Implementation of Quantum Search Algorithm using Classical Fourier Optics, Phys.

[4] Bigourd, D.; Chatel, B.; Schleich, W. P. & Girard, B. (2008). Factorization of Numbers with the Temporal Talbot Effect: Optical Implementation by a Sequence of Shaped

[5] Bowen, W. P.; Schnabel, R.; Lam, P. K.; & Ralph, T. C. (2003). Experimental Investigation of Criteria for Continuous variable entanglement, Phys. Rev. Lett., Vol.90, pp043601 [6] Brendel, J.; Gisin, N.; Tittel, W. & Zbinden, H. (1999). Pulsed energy-time entangled twin-photon source for quantum communication. Phys. Rev. Lett., Vol.82, pp2594 [7] Chen, J.; Lee, K. F.; and Kumar, P. (2007). Deterministic quantum splitter based on timereversed Hong-Ou-Mandel interference, Phys. Rev. A, Vol.76, pp031804(R)

[8] Chen, J.; Altepeter, J. B.; Medic, M.; Lee, K. F.; Gokden, B.; Hadfield, R. H.; Nam, S. W. & Kumar, P. (2008). Demonstration of a Quantum Controlled-NOT Gate in the

[9] Corndorf, E.; Barbosa, G. A.; Liang, C.; Yuen, H. P. & Kumar, P. (2003). High-speed data encryption over 25 km of fiber by two-mode coherent state quantum cryptogra‐

[10] Grib, A.A.; & Rodrigues, W. A. (1999). Nonlocality in Quantum Physics, Springer, ISBN

Telecommunications Band, Phys. Rev. Lett., Vol.100, pp133603

phy, Opt. Letters. Vol.28, pp2040-2042

030646182X, New York, USA

using Mesoscopic coherent states, Phys. Rev. Lett., Vol.90, pp227901

2 Department of Physics, University of Malaya, Kuala Lumpur, Malaysia

Ultrashort Pulses, Phys. Rev. Lett., Vol.100, pp030202

SC/01 on this work.

**Author details**

, Yong Meng Sua1

Rev. Lett., Vol. 88, pp137901

Kim Fook Lee1

**References**

1951

46 Theory and Practice of Cryptography and Network Security Protocols and Technologies

correlations between two observers over a distance of 10 km through a transmission fiber. We couple one of the orthogonal weak light fields into 10 km of transmission fiber; a quarter-wave plate and a half-wave plate at the output of the fiber compensate for the birefringence. The correlation between the two observers *A* and *B* is found to be preserved over the 10 km transmission fiber (Sua et al., 2011). We managed to establish four types of correlation functions and performed bit correlation for each correlation function shared between the two observers.

In short, for our proposed weak coherent light field optical communication scheme, information is encoded onto the superposition of the vertically and horizontally polarized weak light fields; decoding involves detection of the weak light fields by a balanced homodyne detector and quadrature-phase measurement by a lock-in amplifier. For reliable measurement of the encoded signal, both the phase and the polarization of the weak light field must be stable.

Stability and accurate control of phase and polarization turn out to be the main challenges for a practical implementation of weak coherent light field optical communication. The state of polarization of a light wave is not preserved in typical transmission fiber, so dynamic control of the state of polarization is critical to ensure the reliability of the proposed optical communication scheme. However, dynamic polarization controllers are bulky and expensive (Noe et al., 1999), which severely limits the practicality of our scheme. Phase locking is another challenging obstacle: it is required between the two orthogonal weak light fields that are used to implement the bit correlation between the two observers. Without phase locking, the quadrature-phase measurement performed by the lock-in amplifier is meaningless. Therefore, an optical phase-locked loop must be employed to lock the phases of the two weak light fields. However, for high-data-rate optical communication, the delays allowed in the phase-locked loop are so small that phase locking becomes an enormous challenge (Barry et al., 1992; Kazovsky, 1986).

#### **5. Conclusion**

We have experimentally demonstrated a new type of optical communication protocol based on weak coherent light fields. Coherent bipartite quantum correlations of two distant observers are generated and used to implement key (bit) correlation over a distance of 10 km. Our scheme can be used to provide security as a supplement to the existing decoy-state Bennett-Brassard 1984 protocol and the differential phase-shift quantum key distribution (DPS-QKD) protocol. The realization of intrinsic correlation of weak coherent light fields by the measurement method is a first step toward linear-optics quantum computing with weak light fields and single-photon sources.

#### **Acknowledgements**

K.F.L. and Y.M.S. would like to acknowledge that this research is supported by a start-up fund from the Department of Physics, Michigan Technological University. H.B.A. gratefully acknowledges the support from University of Malaya High Impact Research Grant UM.C/HIR/MOHE/SC/01 on this work.

#### **Author details**

Kim Fook Lee1, Yong Meng Sua1 and Harith B. Ahmad2

1 Department of Physics, Michigan Technological University, Houghton, Michigan, USA

2 Department of Physics, University of Malaya, Kuala Lumpur, Malaysia

#### **References**


Optical Communication with Weak Coherent Light Fields, http://dx.doi.org/10.5772/56375

[11] Grosshans, F. & Grangier, P. (2002). Continuous variable quantum cryptography using coherent states, Phys. Rev. Lett., Vol.88, pp057902

[12] Grosshans, F.; Assche, G. V.; Wenger, J.; Brouri, R.; Cerf, N. J. & Grangier, P. (2003). Quantum key distribution using gaussian-modulated coherent states, Nature, Vol.421, pp238

[13] Kazovsky, L. (1986). Balanced phase-locked loops for optical homodyne receivers: performance analysis, design considerations, and laser linewidth requirements, J. Lightwave Technol., Vol.4, pp182–195

[14] Korolkova, N.; Leuchs, G.; Loudon, R.; Ralph, T. & Silberhorn, C. (2002). Polarization squeezing and continuous-variable polarization entanglement, Phys. Rev. A, Vol.65, pp052306

[15] Kwiat, P. G.; Mattle, K.; Weinfurter, H.; Zeilinger, A.; Sergienko, A. V. & Shih, Y. (1995). New High-Intensity Source of Polarization-Entangled Photon Pairs, Phys. Rev. Lett., Vol.75, pp4337-4341

[16] Lee, K. F. & Thomas, J. E. (2002). Experimental Simulation of Two-Particle Quantum Entanglement using Classical Fields, Phys. Rev. Lett., Vol.88, pp097902

[17] Lee, K. F.; Chen, J.; Liang, C.; Li, X.; Voss, P. L. & Kumar, P. (2006). Observation of high purity entangled photon pairs in telecom band, Optics Letters, Vol.31, pp1905

[18] Lee, K. F.; Kumar, P.; Sharping, J. E.; Foster, M. A.; Gaeta, A. L.; Turner, A. C. & Lipson, M. (2008). Telecom-band entanglement generation for chipscale quantum processing, arXiv:0801.2606 (quant-ph)

[19] Lee, K. F. & Thomas, J. E. (2004). Entanglement with classical fields, Phys. Rev. A, Vol.69, pp052311

[20] Lee, K. F. (2009). Observation of bipartite correlations using coherent light for optical communication, Optics Letters, Vol.34, pp1099-1101

[21] Liang, C.; Lee, K. F.; Voss, P. L.; Corndorf, E.; Gregory, S.; Chen, J.; Li, X. & Kumar, P. (2005). Single-Photon Detector for High-Speed Quantum Communication Applications in the Fiber-optic Telecom Band, Free-Space Laser Communications V, Edited by Voelz, David G. & Ricklin, Jennifer C., Proceedings of the SPIE, Vol.5893, pp282-287

[22] Liang, C.; Lee, K. F.; Chen, J. & Kumar, P. (2006). Distribution of fiber-generated polarization entangled photon-pairs over 100 km of standard fiber in OC-192 WDM environment, postdeadline paper, Optical Fiber Communications Conference and the 2006 National Fiber Optic Engineers Conference, Anaheim Convention Center, Anaheim, CA

[23] Liang, C.; Lee, K. F.; Medic, M.; Kumar, P. & Nam, S. W. (2007). Characterization of fiber-generated entangled photon pairs with superconducting single-photon detectors, Optics Express, Vol.15, pp1322

[24] Noe, R.; Sandel, D.; Yoshida-Dierolf, M.; Hinz, S.; Mirvoda, V.; Schopflin, A.; Glingener, C.; Gottwald, E.; Scheerer, C.; Fischer, G.; Weyrauch, T. & Haase, W. (1999). Polarization mode dispersion compensation at 10, 20, and 40 Gb/s with various optical equalizers, J. Lightwave Technol., Vol.17, pp1602–1616

[25] Peres, A. (1995). Quantum Theory: Concepts and Methods (Fundamental Theories of Physics), Springer, ISBN 0792336321, New York, USA

[26] Qi, B.; Huang, L. L.; Qian, L. & Lo, H. K. (2007). Experimental study on the Gaussian-modulated coherent state quantum key distribution over standard telecommunication fibers, Phys. Rev. A, Vol.76, pp052323

[27] Scarani, V.; Bechmann-Pasquinucci, H.; Cerf, N. J.; Dusek, M.; Lütkenhaus, N. & Peev, M. (2009). Rev. Mod. Phys., Vol.81, pp1301

[28] Sharping, J. E.; Lee, K. F.; Foster, M. A.; Turner, A. C.; Lipson, M.; Gaeta, A. L. & Kumar, P. (2006). Generation of correlated photons through parametric scattering in nanoscale silicon waveguides, Optics Express, Vol.14, pp12388

[29] Silberhorn, C.; Ralph, T. C.; Lutkenhaus, N. & Leuchs, G. (2002). Continuous variable quantum cryptography: Beating the 3 dB loss limit, Phys. Rev. Lett., Vol.89, pp167901

[30] Sua, Y. M.; Scanlon, E.; Beaulieu, T.; Bollen, V. & Lee, K. F. (2011). Intrinsic quantum correlations of weak coherent states for quantum communication, Phys. Rev. A, Vol.83, pp030302(R)

[31] Tittel, W.; Brendel, J.; Gisin, B.; Herzog, T.; Zbinden, H. & Gisin, N. (1998). Experimental demonstration of quantum correlations over more than 10 km, Phys. Rev. A, Vol.57, pp3229

[32] Tittel, W.; Brendel, J.; Zbinden, H. & Gisin, N. (1999). Long distance Bell-type tests using energy-time entangled photons, Phys. Rev. A, Vol.59, pp4150

[33] Weedbrook, C.; Lance, A. M.; Bowen, W. P.; Symul, T.; Ralph, T. C. & Lam, P. K. (2004). Phys. Rev. Lett., Vol.93, pp170504

[34] Wilde, M. W.; Brun, T. A.; Dowling, J. P. & Lee, H. (2008). Coherent communication with linear optics, Phys. Rev. A, Vol.77, pp022321

[35] Yonezawa, H.; Aoki, T. & Furusawa, A. (2004). Demonstration of a quantum teleportation network for continuous variables, Nature, Vol.431, pp430-433

[36] Yuen, H. P. (2004). KCQ: A New Approach to quantum cryptography I. General Principles and Key generation, quant-ph/0311061 v6


**Chapter 3**

**Efficient Computation for Pairing Based Cryptography: A State of the Art**

Nadia El Mrabet

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56295

© 2013 El Mrabet; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### 1. Introduction

Cryptographic protocols are divided into two main classes: symmetric systems, where keys are secret, and asymmetric approaches with public keys. The security of this second category is based on algebraic problems known to be difficult to solve. Historically, in 1976, Diffie and Hellman described a protocol [26] which was one of the first crypto-systems based on the discrete logarithm problem. Later, the introduction of elliptic curves in cryptography was promoted by V. Miller [55] and N. Koblitz [47], and a large spectrum of crypto-systems appeared. Pairings are bilinear maps which make it possible to transform a problem on abelian varieties, such as elliptic curves, into a problem on finite fields. A first use of such maps concerns cryptanalysis and was proposed in 1993 by Menezes, Okamoto and Vanstone [53] and in 1994 by G. Frey and H. G. Rück [36], who linked pairings to the discrete logarithm problem on curves.

In 2000, A. Joux [45] proposed a tripartite Diffie-Hellman key exchange using pairings. That was the beginning of a blossoming literature on the subject. In 2003, D. Boneh and M. Franklin answered a challenge posed by Shamir [65] in 1984 by creating an identity-based encryption scheme [19] based on pairings. The construction of pairings is based on the algorithm proposed in 1986 by Victor Miller [54, 56]. A consequence of the rich literature on this subject [62] was the creation of a conference devoted to pairing based cryptography, Pairing [60].

With the birth of this new domain of investigation in cryptography, the problem of implementing these protocols arises. This point is crucial for the practical interest of pairings: the cost and performance of an implementation determine whether a cryptosystem is usable. Some good studies on pairing implementation are given by P. Barreto et al. [13, 15]; we can also refer to some books [29, 37]. We detail later what a pairing is, but at a high level: a pairing is a bilinear map from two groups **G**1, **G**2 into a third group **G**3, all abelian groups of the same order.

$$e : \mathbb{G}_1 \times \mathbb{G}_2 \longrightarrow \mathbb{G}_3$$


The bilinearity is the property that

$$e(a \cdot A, b \cdot B) = e(A, B)^{a \cdot b}.$$
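As a quick numerical illustration of this property (not from the chapter), here is a toy bilinear map in Python: take **G**1 = **G**2 = (Z_r, +) and **G**3 the subgroup of order r in F_p*, with e(A, B) = g^(A·B). The parameters p = 101, r = 5 and the generator g are hypothetical choices made only to keep the arithmetic visible; such a map has no cryptographic value, but it exhibits exactly the bilinearity rule above.

```python
# Toy bilinear map (illustrative only): G1 = G2 = (Z_5, +),
# G3 = the subgroup of order 5 in F_101*, with e(A, B) = g^(A*B).
p, r = 101, 5                  # r divides p - 1, so F_p* has a subgroup of order r
g = pow(2, (p - 1) // r, p)    # an element of order r (here g = 95)

def e(A, B):
    # bilinear by construction: e(a*A, b*B) = g^(a*A*b*B) = e(A, B)^(a*b)
    return pow(g, (A * B) % r, p)

# check e(a*A, b*B) == e(A, B)^(a*b) on sample values
a, b, A, B = 2, 4, 3, 1
assert e(a * A, b * B) == pow(e(A, B), a * b, p)
```

In a real pairing the same identity holds, but **G**1 and **G**2 are elliptic-curve subgroups and e is far from being computable by a single modular exponentiation.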


For an efficient realization, **G**1 and **G**2 are subgroups of an elliptic curve group and **G**3 is a subgroup of a finite field. The sizes of the groups are fixed by security considerations and rely on the fact that the discrete logarithm problem is hard to solve over **G**1, **G**2 and **G**3. Pairings are mainly computed with Miller's algorithm. As a pairing evaluation can be embedded in a smart card, the question of an efficient implementation is very important.

Several publications deal with the efficiency of pairing implementations, each of them focusing on one aspect of the implementation. Here we want to bring together all possible optimizations. The outline of the chapter is the following. First, in Section 2 we present the necessary background for a pairing implementation. We present the first two pairings, the Weil and Tate pairings, as well as their optimizations, the Eta pairing, the Ate pairing and the twisted Ate pairing, which lead to the notions of optimal pairings and pairing lattices. We also give a first analysis of the arithmetic of pairings. In Section 4, we present the mathematical optimizations of pairings: the use of twisted elliptic curves, which leads to the denominator elimination, and the improvement of a squaring using cyclotomic subgroups. In Section 5, we present the arithmetical optimizations of a pairing implementation. We describe the different options for an efficient multiplication in Sections 5.2, 5.3, 5.3.1 and 5.4. We describe as well how an original representation of a finite field can improve a pairing computation in Section 5.5. In Section 5.6, we describe how the choice of the model of elliptic curve and of its coordinates affects the implementation. Finally, we conclude in Section 6.

#### 2. Background and notation

Let *E* be an elliptic curve over a finite field **F**_p, with P∞ denoting the identity element of the associated group of rational points E(**F**_p). For a positive integer r | #E(**F**_p) coprime to p, let **F**_{p^k} be the smallest extension field of **F**_p which contains the r-th roots of unity in the algebraic closure of **F**_p; the extension degree k is called the security multiplier or embedding degree. Let E(**F**_p)[r] (respectively E(**F**_{p^k})[r]) denote the subgroup of E(**F**_p) (respectively E(**F**_{p^k})) of all points of order dividing r. The two groups **G**1 and **G**2 will be subgroups of elliptic curve groups and **G**3 is a subgroup of the multiplicative group of a finite field.
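Since k is the smallest integer with r | p^k − 1, it is simply the multiplicative order of p modulo r, and easy to compute. The helper below is our own illustration, not from the chapter:

```python
# embedding degree: smallest k >= 1 such that r divides p^k - 1,
# i.e. the multiplicative order of p modulo r
def embedding_degree(p, r):
    k, t = 1, p % r
    while t != 1:
        t = (t * p) % r
        k += 1
    return k

# a supersingular toy example: p = 11, r = 3 gives k = 2,
# so the r-th roots of unity live in F_{11^2}
print(embedding_degree(11, 3))   # -> 2
```

For cryptographic use, parameters are chosen so that k is small enough for **F**_{p^k} arithmetic to be practical, yet large enough that the discrete logarithm in **F**_{p^k}* is hard.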

#### 2.1. The Weil, Tate and Ate pairings

#### *2.1.1. The Miller algorithm*

The Miller algorithm is the most important step for the Weil, Tate and Ate pairing computations. It is constructed like a double-and-add scheme following the computation of [r]P. Miller's algorithm is based on the notion of divisors. We only give here the essential elements for the pairing computation.

The Miller algorithm constructs the rational function *fr*,*<sup>P</sup>* associated to the point *P*, where *P* is a generator of **<sup>G</sup>**<sup>1</sup> <sup>⊂</sup> *<sup>E</sup>*(**F***p*); and at the same time, it evaluates *fr*,*P*(*Q*) for a point *<sup>Q</sup>* <sup>∈</sup> **<sup>G</sup>**<sup>2</sup> <sup>⊂</sup> *<sup>E</sup>*(**F***pk* ).

Algorithm 1: Miller(P, Q, l)

Data: l = (l_n ... l_0) (radix-2 representation), P ∈ **G**1 (⊂ E(**F**_p)) and Q ∈ **G**2 (⊂ E(**F**_{p^k}))
Result: f_P(Q) ∈ **G**3 (⊂ **F***_{p^k})
1: T ← P; 2: f1 ← 1; 3: f2 ← 1;
for i = n−1 down to 0 do
  4: T ← [2]T; 5: f1 ← f1² × h1(Q), where h1(x) is the equation of the tangent at the point T;
  if l_i = 1 then
    6: T ← T + P;
    7: f1 ← f1 × h2(Q), where h2(x) is the equation of the line (PT);
  end
end
return f1
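To make the loop concrete, here is a self-contained Python sketch of Miller's algorithm computing the reduced Tate pairing on a toy supersingular curve E : y² = x³ + x over F_11 (r = 3, embedding degree k = 2, with the distortion map φ(x, y) = (−x, iy) supplying Q). The curve, the sample points and every helper name (`line_value`, `reduced_tate`, ...) are our own hypothetical choices, not from the chapter; we also fold the vertical-line denominators into `line_value` and drop the unused f2.

```python
# Reduced Tate pairing via Miller's algorithm on the toy supersingular
# curve E: y^2 = x^3 + x over F_11, which has embedding degree k = 2.
p, a_coef, r = 11, 1, 3                 # field, curve coefficient a, subgroup order

# arithmetic in F_{p^2} = F_p[i]/(i^2 + 1); elements are pairs (u, v) = u + i*v
def fadd(x, y): return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)
def fsub(x, y): return ((x[0] - y[0]) % p, (x[1] - y[1]) % p)
def fmul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
def finv(x):
    n = pow(x[0]*x[0] + x[1]*x[1], p - 2, p)        # inverse of the norm (Fermat)
    return ((x[0]*n) % p, (-x[1]*n) % p)
def fpow(x, e):
    res = (1, 0)
    while e:
        if e & 1: res = fmul(res, x)
        x = fmul(x, x); e >>= 1
    return res

def ec_add(A, B):                                   # group law; None = point at infinity
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and A[1] != B[1]: return None   # A + (-A) = O
    if A == B:
        lam = fmul(fadd(fmul((3, 0), fmul(A[0], A[0])), (a_coef, 0)),
                   finv(fmul((2, 0), A[1])))        # (3x^2 + a) / (2y)
    else:
        lam = fmul(fsub(B[1], A[1]), finv(fsub(B[0], A[0])))
    x3 = fsub(fsub(fmul(lam, lam), A[0]), B[0])
    return (x3, fsub(fmul(lam, fsub(A[0], x3)), A[1]))

def line_value(A, B, Q):
    # value at Q of the line through A and B (tangent if A == B),
    # divided by the vertical line at A + B  (steps 5 and 7 of Algorithm 1)
    if A[0] == B[0] and (A != B or A[1] == (0, 0)):
        return fsub(Q[0], A[0])                     # vertical line: x - x_A
    if A == B:
        lam = fmul(fadd(fmul((3, 0), fmul(A[0], A[0])), (a_coef, 0)),
                   finv(fmul((2, 0), A[1])))
    else:
        lam = fmul(fsub(B[1], A[1]), finv(fsub(B[0], A[0])))
    C = ec_add(A, B)
    num = fsub(fsub(Q[1], A[1]), fmul(lam, fsub(Q[0], A[0])))
    return fmul(num, finv(fsub(Q[0], C[0])))        # l(Q) / v(Q)

def miller(P, Q, l):
    T, f = P, (1, 0)
    for bit in bin(l)[3:]:                          # radix-2 digits after the leading 1
        f = fmul(fmul(f, f), line_value(T, T, Q))   # step 5: f <- f^2 * h1(Q)
        T = ec_add(T, T)                            # step 4: T <- [2]T
        if bit == '1':
            f = fmul(f, line_value(T, P, Q))        # step 7: f <- f * h2(Q)
            T = ec_add(T, P)                        # step 6: T <- T + P
    return f

def reduced_tate(P, Q):                             # f_{r,P}(Q)^((p^2 - 1)/r)
    return fpow(miller(P, Q, r), (p * p - 1) // r)

P = ((5, 0), (3, 0))        # point of order 3 in E(F_11)
Q = ((6, 0), (0, 3))        # distortion image phi(P) = (-x, i*y), also of order 3
print(reduced_tate(P, Q))   # a primitive cube root of unity in F_{p^2}
```

On this toy curve the output is a primitive cube root of unity, and bilinearity can be checked numerically, e.g. `reduced_tate(ec_add(P, P), Q) == fmul(e, e)` for `e = reduced_tate(P, Q)`.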

#### *2.1.2. The pairings*

Definition 2.1. The Weil pairing, denoted *eW* , is defined by:

$$e_W : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_3, \qquad (P, Q) \mapsto (-1)^r \, \frac{f_{r,P}(Q)}{f_{r,Q}(P)}.$$

Definition 2.2. The Tate pairing, denoted *eTate*, is defined by:

$$e_{Tate} : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_3, \qquad (P, Q) \mapsto e_{Tate}(P, Q) = f_{r,P}(Q).$$

Here, the function f_{r,P} is normalized, i.e. (u_0^r f_{r,P})(P∞) = 1 for some **F**_p-rational uniformizer u_0 at P∞. This pairing is only defined up to an r-th power in **F***_{p^k}. In order to obtain a unique value, we raise it to the power (p^k − 1)/r, obtaining an r-th root of unity that we call the reduced Tate pairing

$$\hat{e}_{Tate}(P, Q) = f_{r,P}(Q)^{\frac{p^k - 1}{r}}.$$
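The effect of this final exponentiation can be seen directly in a small prime field (a hypothetical toy, not the chapter's setting): two values that differ by an r-th power, i.e. two representatives of the same coset modulo (F_q*)^r, collapse to the same r-th root of unity under the power (q − 1)/r.

```python
# values defined only up to r-th powers become unique after the
# final exponentiation by (q - 1)/r  (toy parameters: q = 13, r = 3)
q, r = 13, 3
f = 2                                   # one representative of the coset f * (F_q*)^r
for s in range(1, q):                   # multiply f by arbitrary r-th powers s^r
    lhs = pow(f * pow(s, r, q) % q, (q - 1) // r, q)
    assert lhs == pow(f, (q - 1) // r, q)
print(pow(f, (q - 1) // r, q))          # -> 3, an r-th root of unity mod 13
```

The identity behind the loop is (f · s^r)^((q−1)/r) = f^((q−1)/r) · s^(q−1) = f^((q−1)/r), by Fermat's little theorem.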

Let π_p be the Frobenius map on the elliptic curve, π_p : E → E : (x, y) ↦ (x^p, y^p). We denote the Frobenius trace by t. Let T = t − 1, **G**1 := E[r] ∩ Ker(π_p − [1]) and **G**2 := E[r] ∩ Ker(π_p − [p]).
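These two eigenspaces are easy to observe on a toy supersingular curve E : y² = x³ + x over F_11 (a hypothetical example of ours, with F_121 = F_11[i]/(i² + 1)): the Frobenius acts on F_{p^2} as complex conjugation, so a point with F_p-coordinates is fixed (eigenvalue 1), while the distortion image Q = φ(P) = (−x, iy) of an order-3 point satisfies π_p(Q) = −Q = [p]Q, since 11 ≡ 2 mod 3.

```python
# Frobenius eigenspaces on a toy curve: pi_p(x, y) = (x^p, y^p) over
# F_{p^2} = F_11[i]/(i^2 + 1), where z -> z^p is just complex conjugation
p = 11
conj = lambda z: (z[0] % p, (-z[1]) % p)            # z -> z^p in F_{p^2}
frob = lambda pt: (conj(pt[0]), conj(pt[1]))        # pi_p on curve points

P = ((5, 0), (3, 0))     # order-3 point with F_p coordinates
Q = ((6, 0), (0, 3))     # distortion image phi(P) = (-x, i*y)

assert frob(P) == P                                 # P in Ker(pi_p - [1])
neg_Q = (Q[0], (0, (-3) % p))                       # -Q = (x, -y)
assert frob(Q) == neg_Q                             # [11]Q = [2]Q = -Q, so Q in Ker(pi_p - [p])
```

This is exactly the eigenspace structure used to define **G**1 and **G**2 above.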

Theorem 2.3. *For P ∈* **G**1 *and Q ∈* **G**2 *the following properties hold [43]:*

⋄ *Let N = gcd(T^k − 1, p^k − 1) and T^k − 1 = NL. Then e_Tate(Q, P)^L = f_{T,Q}(P)^{c(p^k − 1)/N}, where c = ∑_{i=0}^{k−1} T^{k−1−i} p^i ≡ k p^{k−1} mod r.*

⋄ *f_{T,Q}(P) is a bilinear pairing, called the Ate pairing.*

⋄ *For r not dividing L, the Ate pairing is non-degenerate.*


We therefore obtain the reduced Ate pairing *fT*,*Q*(*P*)(*pk*−1)/*<sup>r</sup>* which is a power of the Tate pairing. As the trace *<sup>t</sup>* is in average of size <sup>√</sup>*p*, for *<sup>r</sup>* <sup>∼</sup> *<sup>p</sup>*, the loop length of Miller's algorithm when computing the Ate pairing is obviously going to be two times shorter than the loop length for the Tate pairing.


#### 2.2. The Duursma-Lee pairing

Duursma and Lee use a family of hyperelliptic curves, including supersingular curves over finite fields of characteristic three, and adapt it to pairing computation.

For **F**<sub>*p*</sub> with *p* = 3<sup>*m*</sup> and *k* = 6, suitable curves are defined by an equation of the form

$$E: y^2 = x^3 - x + b,$$

with *b* = ±1 ∈ **F**<sub>3</sub>. If **F**<sub>*p*<sup>3</sup></sub> = **F**<sub>*p*</sub>[ρ]/(ρ<sup>3</sup> − ρ − *b*) and **F**<sub>*p*<sup>6</sup></sub> = **F**<sub>*p*<sup>3</sup></sub>[σ]/(σ<sup>2</sup> + 1), then the distortion map φ : *E*(**F**<sub>*p*</sub>) → *E*(**F**<sub>*p*<sup>6</sup></sub>) is defined by φ(*x*, *y*) = (ρ − *x*, σ*y*). Then, setting **G**<sub>1</sub> = **G**<sub>2</sub> = *E*(**F**<sub>3<sup>*m*</sup></sub>) and **G**<sub>3</sub> = **F**<sub>*p*<sup>6</sup></sub>, Algorithm 2 computes an admissible, symmetric pairing.

Algorithm 2: The Duursma-Lee pairing algorithm.

```
Input : P = (xP, yP) ∈ G1 and Q = (xQ, yQ) ∈ G2.
Output: e(P,Q) ∈ G3.
f ← 1;
for i = 1 to m do
    xP ← xP^3;  yP ← yP^3;
    µ ← xP + xQ + b;
    λ ← −yP·yQ·σ − µ^2;
    g ← λ − µ·ρ − ρ^2;
    f ← f · g;
    xQ ← xQ^(1/3);  yQ ← yQ^(1/3);
end
return f^(p^3 − 1);
```
#### 2.3. The η and η<sub>G</sub> pairings

Barreto et al. [12] introduced the η pairing by generalising the Duursma-Lee pairing to allow the use of supersingular curves over finite fields of any small characteristic; Kwon [49] independently used the same approach, and in both cases characteristic two is of specific interest. The η pairing already has a simple final powering, but work by Galbraith et al. [38] (see [59, Section 5.4]) demonstrates that it can be eliminated entirely; the crucial step is forgoing the usual denominator elimination, which is enabled by the evaluation of additional line functions. Interestingly, the analysis of this approach reveals no negative security implication in terms of pairing inversion. We follow Whelan and Scott [71] in terming this approach the η<sub>G</sub> pairing.

For **F**<sub>*p*</sub> with *p* = 2<sup>*m*</sup> and *k* = 4, suitable curves are defined by an equation of the form

$$E: y^2 + y = x^3 + x + b$$

Algorithm 3: The η pairing algorithm.

```
Input : P = (xP, yP) ∈ G1 and Q = (xQ, yQ) ∈ G2.
Output: e(P,Q) ∈ G3.
f ← 1;
for i = 1 to m do
    xP ← xP^2;  yP ← yP^2;
    µ ← xP + xQ;
    λ ← µ + xP·xQ + yP + yQ + b;
    g ← λ + µ·t + (µ + 1)·t^2;
    f ← f · g;
    xQ ← xQ^(1/2);  yQ ← yQ^(1/2);
end
return f^(p^2 − 1);
```

Algorithm 4: The η<sub>G</sub> pairing algorithm.
Input : P = (xP, yP) ∈ G1 and Q = (xQ, yQ) ∈ G2.
Output: e(P,Q) ∈ G3.

with *b* ∈ **F**<sub>2</sub>. If **F**<sub>*p*<sup>2</sup></sub> = **F**<sub>*p*</sub>[*s*]/(*s*<sup>2</sup> + *s* + 1) and **F**<sub>*p*<sup>4</sup></sub> = **F**<sub>*p*<sup>2</sup></sub>[*t*]/(*t*<sup>2</sup> + *t* + *s*), then the distortion map φ : *E*(**F**<sub>*p*</sub>) → *E*(**F**<sub>*p*<sup>4</sup></sub>) is defined by φ(*x*, *y*) = (*x* + *s*<sup>2</sup>, *y* + *sx* + *t*). Note that *s* = *t*<sup>5</sup> and that *t* satisfies *t*<sup>4</sup> = *t* + 1, so we can also represent **F**<sub>*p*<sup>4</sup></sub> as **F**<sub>*p*</sub>[*t*]/(*t*<sup>4</sup> + *t* + 1). Then, by setting **G**<sub>1</sub> = **G**<sub>2</sub> = *E*(**F**<sub>*p*</sub>) and **G**<sub>3</sub> = **F**<sub>*p*<sup>4</sup></sub>, Algorithm 4 computes an admissible, symmetric pairing.

Historically, the Weil and Tate pairings were developed by mathematicians without any consideration for cryptography. As the efficient implementation of pairings became an interesting question for cryptographers, they sought to improve these two pairings. The Ate and twisted Ate pairings were improvements of the Tate pairing obtained through mathematical properties [43]. The notions of optimal pairings [70] and pairing lattices [42] are the latest developments. The number of iterations is reduced to the minimum in [70]. In [42], F. Hess proves that all pairings are related: the different pairings are in fact elements of a lattice in which each pairing is a power of another. In the following sections we work with the Tate pairing, since any optimization of the Tate pairing can be easily adapted to the other pairings.

#### 2.4. Analysis of the arithmetic

In order to present the different existing options for optimizing a pairing computation, we will focus on Miller's algorithm: among the several algorithms that exist to compute a pairing, the most efficient implementations are obtained with Miller's algorithm.

Let *P* = (*X<sub>P</sub>*, *Y<sub>P</sub>*) be a point in affine coordinates of the set *E*(**F**<sub>*p*</sub>)[*r*] (or in Jacobian coordinates with *Z<sub>P</sub>* = 1). We consider a point *Q* of order *r* in *E*(**F**<sub>*p*<sup>*k*</sup></sub>), also given in affine coordinates (*x<sub>Q</sub>*, *y<sub>Q</sub>*). Let **G**<sub>1</sub> = ⟨*P*⟩ be the subgroup of order *r* of *E*(**F**<sub>*p*</sub>) generated by the point *P*, and **G**<sub>2</sub> = ⟨*Q*⟩ the subgroup of order *r* of *E*(**F**<sub>*p*<sup>*k*</sup></sub>). We want to compute a pairing between **G**<sub>1</sub> and **G**<sub>2</sub>, under the condition **G**<sub>1</sub> ≠ **G**<sub>2</sub>. The group **G**<sub>3</sub> is a subgroup of order *r* of **F**<sup>⋆</sup><sub>*p*<sup>*k*</sup></sub>.

Let *T* = (*X<sub>T</sub>*, *Y<sub>T</sub>*, *Z<sub>T</sub>*) be a point of *E*(**F**<sub>*p*<sup>*k*</sup></sub>) in Jacobian coordinates. The main advantage of Jacobian coordinates is that no field inversion is needed during the arithmetic operations over the elliptic curve.


Miller's algorithm is given in Algorithm 5.

#### Algorithm 5: Miller(*P*,*Q*,*r*)

```
Input : r = (rn ... r0) (binary representation), P ∈ G1 (⊂ E(Fp)) and Q ∈ G2 (⊂ E(Fp^k));
Output: fr,P(Q) ∈ G3 (⊂ F⋆p^k);
1. T ← P;
2. f1 ← 1;
3. f2 ← 1;
for i = n−1 to 0 do
    4. T ← [2]T;
    5. f1 ← f1^2 × l1(Q), where l1 is the tangent to E at the point T;
    6. f2 ← f2^2 × v1(Q), where v1 is the vertical line at the point [2]T;
       (Div(l1/v1) = 2(T) − ([2]T) − P∞)
    if ri = 1 then
        7. T ← T + P;
        8. f1 ← f1 × l2(Q), where l2 is the line (PT);
        9. f2 ← f2 × v2(Q), where v2 is the vertical line at P + T;
           (Div(l2/v2) = (T) + DP − ((T) ⊕ DP) − P∞)
    end
end
return f1/f2;
```

The functions *l*<sub>1</sub>(*Q*), *l*<sub>2</sub>(*Q*), *v*<sub>1</sub>(*Q*) and *v*<sub>2</sub>(*Q*) occurring in Miller's algorithm take their values in **F**<sup>⋆</sup><sub>*p*<sup>*k*</sup></sub>, and the parameters *f*<sub>1</sub> and *f*<sub>2</sub> are elements of **F**<sup>⋆</sup><sub>*p*<sup>*k*</sup></sub>.

The order *r* of the subgroups is chosen to have a very sparse binary decomposition. In this case, the addition step in Miller's algorithm is rarely executed, whereas the doubling step is computed at every iteration. As a consequence, we consider that the complexity of Miller's algorithm is approximately given by the doubling step, and we will only consider the computation of *l*<sub>1</sub> and *v*<sub>1</sub> in the complexity evaluation of Miller's algorithm.
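As an illustration of Algorithm 5, the following sketch implements a complete reduced Tate pairing in Python. All concrete parameters are illustrative choices, not taken from the text: the supersingular curve *y*² = *x*³ + *x* over **F**₅₉ with *r* = 5, embedding degree *k* = 2 and distortion map (*x*, *y*) → (−*x*, *iy*). The denominators *v*₁, *v*₂ are omitted, since here their values lie in **F**<sub>*p*</sub> and vanish under the final exponentiation.

```python
# Toy reduced Tate pairing on E: y^2 = x^3 + x over F_59, r = 5, k = 2.
p, r = 59, 5                           # #E(F_p) = p + 1 = 60 and r | p + 1

# F_{p^2} = F_p[i]/(i^2 + 1); elements are pairs (a, b) meaning a + b*i.
def f2_mul(u, v):
    a, b = u; c, d = v
    return ((a*c - b*d) % p, (a*d + b*c) % p)

def f2_pow(u, e):
    out = (1, 0)
    while e:
        if e & 1: out = f2_mul(out, u)
        u = f2_mul(u, u); e >>= 1
    return out

# Affine arithmetic on E(F_p); None denotes the point at infinity.
def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if P == Q: lam = (3*x1*x1 + 1) * pow(2*y1, p - 2, p) % p
    else:      lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (lam*lam - x1 - x2) % p
    return (x3, (lam*(x1 - x3) - y1) % p)

def ec_mul(n, P):
    R = None
    while n:
        if n & 1: R = ec_add(R, P)
        P = ec_add(P, P); n >>= 1
    return R

# Line through T and R (tangent when T = R), evaluated at the distorted
# point phi(Q) = (-xQ, yQ*i); the result is an element of F_{p^2}.
def line(T, R, xq, yq):
    (x1, y1), (x2, y2) = T, R
    if x1 == x2 and (y1 + y2) % p == 0:           # vertical line x - x1
        return ((xq - x1) % p, 0)
    if T == R: lam = (3*x1*x1 + 1) * pow(2*y1, p - 2, p) % p
    else:      lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    return ((-y1 - lam*(xq - x1)) % p, yq % p)    # yq*i - y1 - lam*(xq - x1)

def tate(P, Q):                        # reduced Tate pairing e(P, phi(Q))
    xq, yq = (-Q[0]) % p, Q[1]         # apply the distortion map to Q
    f, T = (1, 0), P
    for bit in bin(r)[3:]:             # Miller loop over the bits of r
        f = f2_mul(f2_mul(f, f), line(T, T, xq, yq))
        T = ec_add(T, T)
        if bit == '1':
            f = f2_mul(f, line(T, P, xq, yq))
            T = ec_add(T, P)
    return f2_pow(f, (p*p - 1) // r)   # final exponentiation

# Find a point of order r by clearing the cofactor (p + 1)/r = 12.
P = next(R for x in range(p) for y in range(1, p)
         if (y*y - x**3 - x) % p == 0
         for R in [ec_mul((p + 1) // r, (x, y))] if R is not None)

e1 = tate(P, P)
print(e1 != (1, 0),                              # non-degenerate
      f2_pow(e1, r) == (1, 0),                   # order divides r
      tate(ec_mul(2, P), P) == f2_pow(e1, 2),    # bilinear in each slot
      tate(P, ec_mul(2, P)) == f2_pow(e1, 2))
```

Since the order of the loop is *r* = 5 = 101₂, only one addition step is executed, and it degenerates into a vertical line because [4]*P* = −*P*: exactly the sparse-*r* behaviour described above.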

In the general case, we consider that the equation of the elliptic curve is given in the Weierstrass form *E* : *Y*<sup>2</sup> = *X*<sup>3</sup> + *aXZ*<sup>4</sup> + *bZ*<sup>6</sup>, with *a* and *b* elements of **F**<sub>*p*</sub>. In order to stay general, we consider *a* and *b* arbitrary. Indeed, it is possible to take *a* = −3 [20], and the value of *b* is also a vector of optimization, but we do not take these options into consideration. We denote *P* = (*X<sub>P</sub>*, *Y<sub>P</sub>*), *T* = (*X<sub>T</sub>*, *Y<sub>T</sub>*, *Z<sub>T</sub>*) the current point in Miller's algorithm, and 2*T* = (*X*<sub>2*T*</sub>, *Y*<sub>2*T*</sub>, *Z*<sub>2*T*</sub>) the double of *T*.

The formulas of the doubling in Jacobian coordinates are the following [25]

$$C = 2Y\_T^2, \; D = Z\_T^2, \; A = 4X\_T Y\_T^2 = 2X\_T C, \; B = \left(3X\_T^2 + aZ\_T^4\right) \tag{1}$$

$$X\_{2T} = B^2 - 2A, \quad Y\_{2T} = B(A - X\_{2T}) - 2C^2, \quad Z\_{2T} = 2Y\_T Z\_T. \tag{2}$$

In this case, the expressions of *<sup>l</sup>*<sup>1</sup> and *<sup>v</sup>*1, for *<sup>Q</sup>* = (*xQ*,*yQ*) <sup>∈</sup> *<sup>E</sup>*(**F***pk* ) are given by

$$l_1(x_Q, y_Q) = Z_P^2\left(Z_{2T}\,D\,y_Q - B(D\,x_Q - X_T) - 2Y_T^2\right) \tag{3}$$

$$v_1(x_Q, y_Q) = Z_{2T}^2\,Z_P\,x_Q + 4Y_P^2\left(X_P D + X_T Z_P^2\right) - Z_P^2 B^2. \tag{4}$$

We can remark that some intermediary results of the previous formulas may be reused, for instance *Y*<sub>*T*</sub><sup>2</sup>, *Z*<sub>*T*</sub><sup>2</sup>, 4*X*<sub>*T*</sub>*Y*<sub>*T*</sub><sup>2</sup> and (3*X*<sub>*T*</sub><sup>2</sup> + *aZ*<sub>*T*</sub><sup>4</sup>). This precomputation reduces the cost of the doubling step, counted in the number of operations over the finite field **F**<sub>*p*</sub>.
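The doubling formulas (1)-(2) can be checked numerically against classical affine doubling. The curve and base point below (*y*² = *x*³ + 2*x* + 3 over **F**₉₇, point (0, 10)) are arbitrary illustrative choices:

```python
# Check of the Jacobian doubling formulas (1)-(2), which need no inversion.
p, a, b = 97, 2, 3

def jacobian_double(X, Y, Z):
    C = 2*Y*Y % p
    D = Z*Z % p
    A = 2*X*C % p                  # A = 4*X*Y^2 = 2*X*C
    B = (3*X*X + a*D*D) % p        # B = 3*X^2 + a*Z^4
    X2 = (B*B - 2*A) % p
    Y2 = (B*(A - X2) - 2*C*C) % p
    Z2 = 2*Y*Z % p
    return X2, Y2, Z2

def to_affine(X, Y, Z):            # (X/Z^2, Y/Z^3)
    zi = pow(Z, p - 2, p)
    return X*zi*zi % p, Y*zi**3 % p

def affine_double(x, y):           # classical doubling, with an inversion
    lam = (3*x*x + a) * pow(2*y, p - 2, p) % p
    x3 = (lam*lam - 2*x) % p
    return x3, (lam*(x - x3) - y) % p

x, y = 0, 10                       # 10^2 = 100 = 3 (mod 97), so (0,10) is on E
assert (y*y - x**3 - a*x - b) % p == 0
print(to_affine(*jacobian_double(x, y, 1)) == affine_double(x, y))
```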

Let *A*<sub>*p*<sup>*e*</sup></sub> (respectively *Sub*<sub>*p*<sup>*e*</sup></sub>, *Sq*<sub>*p*<sup>*e*</sup></sub> and *M*<sub>*p*<sup>*e*</sup></sub>) denote an addition (respectively a subtraction, a squaring and a multiplication) in the finite field **F**<sub>*p*<sup>*e*</sup></sub>, for *e* a natural integer. Let also *M<sub>a</sub>* be the cost of a multiplication by *a*. Table 1 gives the cost of each operation occurring in the computation of the doubling step; each cost is given as a number of operations over the finite fields. We optimize the computation as much as possible, using no tricks other than the following. We consider that a multiplication by 2 is nothing more than a shift in binary representation and thus may be neglected. As a consequence, a multiplication by 3 can be seen as a multiplication by 2 plus an addition, so a multiplication by 3 is equivalent to an addition.
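The last remark can be checked directly: in binary, multiplying by 2 is a left shift, and multiplying by 3 is that shift followed by one addition:

```python
# Multiplication by small constants via shifts, as assumed in the cost counts.
x = 0b101101            # an arbitrary operand (45)
assert x << 1 == 2 * x          # times 2: one binary shift
assert (x << 1) + x == 3 * x    # times 3: one shift plus one addition
print(3 * x)
```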


| Operation | Cost |
|---|---|
| Doubling of a point over *E* | 4*A*<sub>*p*</sub> + 3*Sub*<sub>*p*</sub> + *M<sub>a</sub>* + 4*S*<sub>*p*</sub> + 4*M*<sub>*p*</sub> |
| Evaluation of *l*<sub>1</sub> | 2*Sub*<sub>*p*</sub> + *Sub*<sub>*p*<sup>*k*</sup></sub> + *S*<sub>*p*</sub> + (3 + 3*k*)*M*<sub>*p*</sub> |
| Evaluation of *v*<sub>1</sub> | 2*A*<sub>*p*</sub> + *Sub*<sub>*p*</sub> + 3*S*<sub>*p*</sub> + (5 + *k*)*M*<sub>*p*</sub> |
| Step 1 in Algorithm 5 | 6*A*<sub>*p*</sub> + 4*Sub*<sub>*p*</sub> + *Sub*<sub>*p*<sup>*k*</sup></sub> + 8*S*<sub>*p*</sub> + (12 + 4*k*)*M*<sub>*p*</sub> + 2*S*<sub>*p*<sup>*k*</sup></sub> + 2*M*<sub>*p*<sup>*k*</sup></sub> |

**Table 1.** Cost of the doubling step in Miller's algorithm



We will present the optimizations of pairings related to the arithmetic of finite fields, then in Section 4 the optimizations related to mathematics, and in Section 5 the optimizations related to algorithmic improvements.

#### 3. Pairing based cryptography

The first use of pairings in cryptography was destructive: in [53] the Weil pairing was used to shift the discrete logarithm problem from an elliptic curve to a finite field. As the discrete logarithm problem is more easily solved over a finite field than over an elliptic curve, the MOV attack consists in transferring a hard problem to a structure where the same problem is easier. The MOV attack is named after its authors Menezes, Okamoto and Vanstone. Later on, pairings were used to improve existing protocols, such as the tripartite Diffie-Hellman key exchange [45], and to construct original protocols like identity-based encryption [19, 21].

The aim of identity-based encryption is that a person λ, even if λ knows nothing about cryptography, is able to receive and, more importantly, to read an encrypted message with almost no help.

The public key of λ is its identity; its private key is sent to λ by a trusted authority T. This trusted authority holds all the private keys related to the identity-based protocol.


The general scheme of identity-based encryption is the following.

The public data are an elliptic curve *E* over a finite field **F**<sub>*p*</sub>, a pairing *e*ˆ and a hash function *H*; this hash function associates a point of *E*(**F**<sub>*p*</sub>) to an identity: *H* : {*Identity*} → *E*(**F**<sub>*p*</sub>). We consider that two persons, Alice and Bob, want to exchange a common secret in order to use it as a key in a secure communication.

With the public data, Alice can compute *QB* = *H*(*Bob*) the public key of Bob and Bob can compute *QA* = *H*(*Alice*) the public key of Alice.

Alice and Bob request their secret keys from the trusted authority. A secret key is a point of *E*(**F**<sub>*p*</sub>).

The trusted authority chooses *s* as its secret key, then it generates *P<sub>A</sub>* = [*s*]*Q<sub>A</sub>*, the secret key of Alice, and *P<sub>B</sub>* = [*s*]*Q<sub>B</sub>*, the secret key of Bob.

Then, Alice (respectively Bob) can compute *e*ˆ(*P<sub>A</sub>*, *Q<sub>B</sub>*) (resp. *e*ˆ(*Q<sub>A</sub>*, *P<sub>B</sub>*)); by bilinearity, Alice and Bob have computed the same key *e*ˆ(*Q<sub>A</sub>*, *Q<sub>B</sub>*)<sup>[*s*]</sup>. Indeed:

$$\hat{e}([s]H(A),H(B)) = \hat{e}(H(A),[s]H(B)) = \hat{e}(H(A),H(B))^{[s]}.$$
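The key-agreement flow above can be sketched with a deliberately insecure toy model of a symmetric pairing, in which "points" are represented by their discrete logarithms modulo *r*, so that bilinearity is immediate. All parameters and the hash construction are illustrative assumptions, not a real pairing:

```python
# Toy model of the identity-based key agreement; no security whatsoever.
import hashlib

r, q = 1009, 12109                 # toy primes with r | q - 1 (12108 = 12*1009)
g = pow(2, 12, q)                  # g = 2^((q-1)/r) has order r in F_q*

def H(identity):                   # hash an identity to a "point" (a log mod r)
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % r

def e(u, v):                       # bilinear toy map: e([a]u, [b]v) = e(u, v)^(a*b)
    return pow(g, (u * v) % r, q)

s = 123                            # the trusted authority's secret key
QA, QB = H("Alice"), H("Bob")      # public keys derived from the identities
PA, PB = s*QA % r, s*QB % r        # private keys issued by the authority

key_alice = e(PA, QB)              # Alice computes e(P_A, Q_B)
key_bob   = e(QA, PB)              # Bob computes e(Q_A, P_B)
print(key_alice == key_bob == pow(e(QA, QB), s, q))
```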

#### 4. Mathematical optimizations

We recall here the mathematical optimizations of pairings. As a pairing is defined over an elliptic curve, which is an abelian variety, the first optimizations for a pairing computation come from the mathematical background of pairings. We will use the twist of an elliptic curve; pairing-friendly elliptic curves will follow. We will then consider the cyclotomic subgroup of a finite field, and finally how the final exponentiation in a pairing computation can be improved.

#### 4.1. The twist of an elliptic curve

The twisted elliptic curve of *E* is another elliptic curve isomorphic to *E*. Using twisted elliptic curves (when possible) in pairing-based cryptography is a way to avoid the denominator evaluation in Miller's algorithm. The execution of Miller's algorithm involves computations over *E*(**F**<sub>*p*<sup>*k*</sup></sub>); considering a twist of degree *d* of *E*(**F**<sub>*p*<sup>*k*</sup></sub>) allows some computations to be executed in *E*˜(**F**<sub>*p*<sup>*k*/*d*</sup></sub>), where *E*˜(**F**<sub>*p*<sup>*k*/*d*</sup></sub>) is the twisted elliptic curve of *E*(**F**<sub>*p*<sup>*k*</sup></sub>) [64].

Definition 4.1. Let *E* and *E*′ be two elliptic curves; the elliptic curve *E*′ is a twisted elliptic curve of *E* if there exists an isomorphism Φ defined over **F**<sub>*p*</sub> mapping each point of *E*′ to a point of *E*.

There is a limited number of twisted elliptic curves of *E*. The number of twists depends on the finite field over which the elliptic curve *E* is defined. Theorem 4.2, taken from [64], gives the classification of the possible twists.
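The degree-2 case of this classification can be verified numerically on a toy curve: every point of the twist *E*′ : *Dy*² = *x*³ + *ax* + *b* maps to a point of *E* under Φ(*x*, *y*) = (*x*, *yD*<sup>1/2</sup>), since (*yD*<sup>1/2</sup>)² = *Dy*². The parameters below (*p* = 11, *a* = 3, *b* = 5, *D* = 2) are illustrative choices:

```python
# Numeric check of the degree-2 twist: every affine point of E' maps to E.
p, a, b, D = 11, 3, 5, 2           # 2 is a quadratic non-residue mod 11

assert pow(D, (p - 1)//2, p) == p - 1   # Euler criterion: D is not a square

# enumerate the affine points of the twist E': D*y^2 = x^3 + a*x + b
twisted = [(x, y) for x in range(p) for y in range(p)
           if (D*y*y - x**3 - a*x - b) % p == 0]

for x, y in twisted:
    y2 = D*y*y % p                 # the image y' = y*sqrt(D) satisfies y'^2 = D*y^2
    assert (y2 - x**3 - a*x - b) % p == 0   # (x, y') lies on E
print(len(twisted) > 0)
```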

Theorem 4.2. *Let E be an elliptic curve of equation y*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b defined over* **F**<sub>*p*<sup>*k*</sup></sub>. *Depending on the value of k, the possible degrees d of twists are 2, 3, 4 and 6. Let E*′ *be a twist of E; the morphism* Φ<sub>*d*</sub> : *E*′ → *E between E and E*′ *is one of the following.*

• *d* = 2: *E*′ : *Dy*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b*, defined over **F**<sub>*p*<sup>*k*/2</sup></sub>, where *D* ∈ **F**<sub>*p*<sup>*k*/2</sup></sub> is not a quadratic residue, i.e. the polynomial *X*<sup>2</sup> − *D* has no root over **F**<sub>*p*<sup>*k*/2</sup></sub>. The morphism is defined by Φ<sub>*d*</sub>(*x*, *y*) → (*x*, *yD*<sup>1/2</sup>).

• *d* = 4: the elliptic curve *E* has a twist of degree 4 if and only if *b* = 0. The equation of *E*′ is then *y*<sup>2</sup> = *x*<sup>3</sup> + (*a*/*D*)*x*, where *D* is not a residue of degree 4, i.e. the polynomial *X*<sup>4</sup> − *D* has no root in **F**<sub>*p*<sup>*k*/4</sup></sub>. The morphism is Φ<sub>*d*</sub>(*x*, *y*) → (*xD*<sup>1/2</sup>, *yD*<sup>3/4</sup>).

• *d* = 3 (resp. 6): the curve *E* has a twist of degree 3 or 6 if and only if *a* = 0. The equation of *E*′ is then *y*<sup>2</sup> = *x*<sup>3</sup> + *b*/*D*, where *D* is not a residue of degree 3 (resp. 6), i.e. the polynomial *X*<sup>3</sup> − *D* (resp. *X*<sup>6</sup> − *D*) has no root. The morphism is Φ<sub>*d*</sub>(*x*, *y*) → (*xD*<sup>1/3</sup>, *yD*<sup>1/2</sup>).

Considering the definition above, an elliptic curve can admit a twist of degree 2, 3, 4 or 6. We will only consider here twisted elliptic curves of even degree. In order to simplify the notations, we will consider a twist of degree 2; the same method can be applied for twists of degree 4 and 6. The case of a twist of degree 3 is a little different but can also be handled; we refer to [31] for more details. Using a twisted elliptic curve of *E*(**F**<sub>*p*<sup>*k*</sup></sub>) allows some computations of Miller's algorithm to be performed in a subfield of **F**<sub>*p*<sup>*k*</sup></sub> instead of **F**<sub>*p*<sup>*k*</sup></sub> itself, and thus simplifies the computation; it is the solution to avoid the denominators in Miller's algorithm (i.e. the update of the function *f*<sub>2</sub>). We will denote *E*′(**F**<sub>*p*<sup>*k*/2</sup></sub>) the twisted curve of *E*(**F**<sub>*p*<sup>*k*</sup></sub>), for an even *k*. We can remark that the twisted elliptic curve of *E* is an elliptic curve defined over an extension of degree half of the initial extension **F**<sub>*p*<sup>*k*</sup></sub> [11]. Let ν ∈ **F**<sub>*p*<sup>*k*/2</sup></sub> be a non-square element of **F**<sub>*p*<sup>*k*/2</sup></sub>; then √ν is an element of **F**<sub>*p*<sup>*k*</sup></sub> \ **F**<sub>*p*<sup>*k*/2</sup></sub>. We can define *E*′, the twisted elliptic curve of *E*(**F**<sub>*p*<sup>*k*</sup></sub>), by the equation ν*y*<sup>2</sup> = *x*<sup>3</sup> − 3*x* + *b*. The morphism mapping *E*′(**F**<sub>*p*<sup>*k*/2</sup></sub>) to *E*(**F**<sub>*p*<sup>*k*</sup></sub>) is Ψ<sub>2</sub>, defined by Ψ<sub>2</sub> : *E*′(**F**<sub>*p*<sup>*k*/2</sup></sub>) → *E*(**F**<sub>*p*<sup>*k*</sup></sub>), (*x*, *y*) → (*x*, *y*√ν). The probability that the point *Q* = (*x*, *y*√ν), image of *Q*′ = (*x*, *y*) ∈ *E*′ by Ψ<sub>2</sub>, belongs to the subgroup generated by *P* ∈ *E*(**F**<sub>*p*</sub>) is negligible [11]; this assures us that the pairing is non-degenerate between **G**<sub>1</sub> and **G**<sub>2</sub>.

8 Theory and Practice of Cryptography and Network Security Protocols and Technologies

The general scheme of identity based encryption is the following.

*QA* = *H*(*Alice*) the public key of Alice.

and *PB* = [*s*]*QB* the secret key of Bob.

have calculated the same key: *e*ˆ(*QA*,*QB*)[*s*]

4. Mathematical optimizations

4.1. The twist of an elliptic curve

is the twisted elliptic curve of *E*(**F***pk* ) [64].

of the possible twists.

final exponentiation in a pairing computation can be improve.

*E*(**F***p*).

trusted authority will have all the private keys related with the identity based protocol.

The public key of λ is its identity, its private key would be send to λ by a trusted authority T. This

The public data are an elliptic curve *E* over a finite field **F***p*, a pairing *e*ˆ and a hash function *H*, this hash function associates a point of *E*(**F***p*) to an identity: *H* : {*Identity*} → *E*(**F***p*). We consider that two person Alice and Bob want to exchange a common secret for use it as a key in a secure communication. With the public data, Alice can compute *QB* = *H*(*Bob*) the public key of Bob and Bob can compute

Alice and Bob request the trusted authority to receive their secret key. The secret key is a point of

The trusted authority chooses *s*, as its secret key, then it generates *PA* = [*s*]*QA* the secret key of Alice

Then, Alice (respectively Bob) can compute *e*ˆ(*PA*,*QB*) (resp. *e*ˆ(*QA*,*PB*), by bilinearity, Alice and Bob

*e*ˆ([*s*]*H*(*A*),*H*(*B*)) = *e*ˆ(*H*(*A*),[*s*]*H*(*B*)) = *e*ˆ(*H*(*A*),*H*(*B*))[*s*]


• *<sup>d</sup>* <sup>=</sup> 2, *<sup>E</sup>*′ : *Dy*<sup>2</sup> <sup>=</sup> *<sup>x</sup>*<sup>3</sup> <sup>+</sup> *ax* <sup>+</sup> *<sup>b</sup>* defined over **<sup>F</sup>***pk*/<sup>2</sup> , where *<sup>D</sup>* <sup>∈</sup> **<sup>F</sup>***pk*/<sup>2</sup> is not a quadratic residue, i.e. such that the polynomial *<sup>X</sup>*<sup>2</sup> <sup>−</sup>*<sup>D</sup>* has no solution over **<sup>F</sup>***pk*/<sup>2</sup> . The morphism <sup>Φ</sup>*<sup>d</sup>* is defined by

$$\begin{aligned} \Phi\_d: & E' \to E \\ \Phi\_d(\mathbf{x}, \mathbf{y}) \to (\mathbf{x}, \mathbf{y}D^{1/2}). \end{aligned}$$

• *d* = 4. The elliptic curve *E* has a twist of degree 4 if and only if *b* = 0. The equation of *E*′ is then *y*<sup>2</sup> = *x*<sup>3</sup> + (*a*/*D*)*x*, where *D* is not a residue of degree 4, i.e. the polynomial *X*<sup>4</sup> − *D* has no root in **F***pk*/4. The morphism is then

$$\begin{aligned} \Phi\_d: & E' \to E \\ \Phi\_d(\mathbf{x}, \mathbf{y}) \to (\mathbf{x}D^{1/2}, \mathbf{y}D^{3/4}). \end{aligned}$$

• *d* = 3 (resp. 6). The curve *E* has a twist of degree 3 or 6 if and only if *a* = 0. The equation of *E*′ is then *y*<sup>2</sup> = *x*<sup>3</sup> + *b*/*D*, where *D* is not a residue of degree 3 (resp. 6), i.e. the polynomial *X*<sup>3</sup> − *D* (resp. *X*<sup>6</sup> − *D*) has no root in **F***pk*/3 (resp. **F***pk*/6). The morphism is then

$$\begin{array}{c} \Phi\_d: E' \to E \\ \Phi\_d(\mathbf{x}, \mathbf{y}) \to (\mathbf{x}D^{1/3}, \mathbf{y}D^{1/2}). \end{array}$$
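As a concrete sanity check of the degree-2 case, the sketch below verifies that the morphism Φ<sub>2</sub>(*x*, *y*) = (*x*, *y* · *D*<sup>1/2</sup>) maps every point of the twist *E*′: *Dy*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b* into *E* over **F***p*<sup>2</sup>, representing **F***p*<sup>2</sup> as pairs *u* + *v*√*D*. The parameters (*p* = 23, *a* = 1, *b* = 5, *D* = 5) are illustrative toy values, not taken from the chapter.

```python
# Toy check of the quadratic twist morphism Phi_2: E' -> E, with k = 2.
# Elements of F_{p^2} = F_p(sqrt(D)) are pairs (u, v) meaning u + v*sqrt(D).
p = 23
a, b = 1, 5                      # E: y^2 = x^3 + a*x + b
D = 5                            # a quadratic non-residue mod 23

# Euler's criterion: D^((p-1)/2) = -1 exactly when D is a non-residue.
assert pow(D, (p - 1) // 2, p) == p - 1

def f2_mul(z, w):
    """Multiply z = (u1, v1) and w = (u2, v2) in F_{p^2}, using sqrt(D)^2 = D."""
    (u1, v1), (u2, v2) = z, w
    return ((u1 * u2 + D * v1 * v2) % p, (u1 * v2 + v1 * u2) % p)

def on_E_over_Fp2(X, Y):
    """Check Y^2 == X^3 + a*X + b with X, Y in F_{p^2}."""
    lhs = f2_mul(Y, Y)
    rhs = f2_mul(f2_mul(X, X), X)
    rhs = ((rhs[0] + a * X[0] + b) % p, (rhs[1] + a * X[1]) % p)
    return lhs == rhs

# Enumerate the affine points of the twist E': D*y^2 = x^3 + a*x + b over F_p ...
twist_points = [(x, y) for x in range(p) for y in range(p)
                if (D * y * y) % p == (x ** 3 + a * x + b) % p]
assert len(twist_points) > 0

# ... and push each one through Phi_2(x, y) = (x, y*sqrt(D)) into E(F_{p^2}).
for x, y in twist_points:
    assert on_E_over_Fp2((x, 0), (0, y))
```

The check succeeds because (*y*√*D*)<sup>2</sup> = *Dy*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b*, exactly the twist equation.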

Considering the definition above, an elliptic curve can admit a twist of degree 2, 3, 4 or 6. We will only consider here the twisted elliptic curve for an even degree. In order to simplify the notations, we will consider a twist of degree 2; the same method can be applied for twists of degree 4 and 6. The case of a twist of degree 3 is a little different, but can also be handled; we refer to [31] for more details. Using a twisted elliptic curve of *E*(**F***pk*) allows some computations of Miller's algorithm to be performed in a subfield of **F***pk* instead of **F***pk*, and thus simplifies the computation. Using a twisted elliptic curve is the solution to avoid the denominators in Miller's algorithm (i.e. the update of the function *f*2). We will denote by *E*˜(**F***pk*/2) the twisted curve of *E*(**F***pk*), for an even *k*. We remark that the twisted elliptic curve of *E* is an elliptic curve defined over an extension of degree half of the initial extension (**F***pk*) [11]. Let ν ∈ **F***pk*/2 be a non-square element of **F***pk*/2; then √ν is an element of **F***pk* \ **F***pk*/2. We can define *E*˜, the twisted elliptic curve of *E*(**F***pk*), of equation ν*y*<sup>2</sup> = *x*<sup>3</sup> − 3*x* + *b*. The morphism mapping *E*˜(**F***pk*/2) to *E*(**F***pk*) is Ψ<sub>2</sub>, defined by

$$\begin{array}{c} \Psi_2 : \tilde{E}(\mathbb{F}_{p^{k/2}}) \to E(\mathbb{F}_{p^k})\\ (x, y) \to (x, y\sqrt{\nu}). \end{array}$$

The probability that the point *Q* = (*x*, *y*√ν), image of *Q*′ = (*x*, *y*) ∈ *E*˜ by Ψ<sub>2</sub>, belongs to the subgroup generated by *P* ∈ *E*(**F***p*) is negligible [11]. This assures us that the pairing is non-degenerate between *P* ∈ *E*(**F***p*) and *Q* = Ψ<sub>2</sub>(*Q*′). As a consequence, we can consider that the coordinates of the point *Q* are elements of **F***pk*/2, plus a multiplication by √ν.

We give the formulae for Miller's algorithm with the use of a twisted elliptic curve. Let *A*, *B*, *C*, *D*, *E* and *F* be the intermediate values in the doubling and addition of a point over *E* (in Jacobian coordinates). These values depend only on the point *P* = (*XP*;*YP*;*ZP*) and multiples of *P*: *T* = (*XT*;*YT*;*ZT*); 2*T* = (*X*2*T*;*Y*2*T*;*Z*2*T*) and *T* + *P* = (*X*3;*Y*3;*Z*3). The equations of the functions *l*1, *l*2, *v*1 and *v*2 are

$$\begin{aligned} l_1(x_Q, y_Q\sqrt{\nu}) &= Z_P^2(Z_{2T}Dy_Q\sqrt{\nu} - B(Dx_Q - X_T) - 2Y_T),\\ v_1(x_Q, y_Q\sqrt{\nu}) &= Z_{2T}^2 Z_P x_Q + 4Y^2(X_P D + X_T Z_P^2) - 9Z_P^2(X_T^2 - Z_T^4)^2,\\ l_2(x_Q, y_Q\sqrt{\nu}) &= Z_{T+P}^2(Z_T^3 E y_Q\sqrt{\nu} - Z_T F(Z_T^2 x_Q) - Y_T E),\\ v_2(x_Q, y_Q\sqrt{\nu}) &= Z_T^3 E(Z_3^3 x_Q + E(A + B) - Z_T^2 Z_P^2 F). \end{aligned} \tag{5}$$

http://dx.doi.org/10.5772/56295


The multiplications and additions in these formulae are made in **<sup>F</sup>***<sup>p</sup>* and **<sup>F</sup>***pk*/<sup>2</sup> . For *xQ* <sup>∈</sup> **<sup>F</sup>***pk*/<sup>2</sup> , if we consider carefully the equations of *v*<sup>1</sup> and *v*2, we can remark that the results *v*1(*xQ*,*yQ* <sup>√</sup>ν) and *v*2(*xQ*,*yQ* <sup>√</sup>ν) are elements of **<sup>F</sup>***pk*/<sup>2</sup> . Indeed, the *<sup>y</sup>*-coordinate of *<sup>Q</sup>* does not appear in the denominator *<sup>v</sup>*<sup>1</sup> and consequently <sup>√</sup><sup>ν</sup> either. This simple remark allows the elimination of the denominators during the Tate pairing computation.

Property 4.3. *During the evaluation of Miller's algorithm for the Tate pairing, the evaluation of f*2 *and thus the computations of v*1 *and v*2 *can be omitted [11].*

Indeed, when using a twist, the equation shows that *v*1(*Q*), *v*2(*Q*) ∈ **F***pk*/2 and then *f*2 ∈ **F***pk*/2. By definition of the embedding degree *k* of the elliptic curve, (*p*<sup>*k*</sup> − 1)/*r* is a multiple of *p*<sup>*k*/2</sup> − 1, and *f*2<sup>(*p*<sup>*k*</sup>−1)/*r*</sup> = 1 by the following proposition.

Property 4.4. *Let r be a prime divisor of* #*E*(**F***p*) *and E be an elliptic curve of embedding degree k relatively to r. Then* (*p*<sup>*k*</sup> − 1)/*r* *is a multiple of p*<sup>*k*/2</sup> − 1*.*

*Proof.* The demonstration is a straightforward consequence of the construction of *k* as the smallest integer such that *r* divides *p*<sup>*k*</sup> − 1. For an even *k*, *p*<sup>*k*</sup> − 1 = (*p*<sup>*k*/2</sup> − 1)(*p*<sup>*k*/2</sup> + 1), and the prime *r* divides *p*<sup>*k*</sup> − 1, so it divides (*p*<sup>*k*/2</sup> − 1) or (*p*<sup>*k*/2</sup> + 1). If *r* divided (*p*<sup>*k*/2</sup> − 1), the definition of *k* would be contradicted; thus *r* divides (*p*<sup>*k*/2</sup> + 1), and (*p*<sup>*k*</sup> − 1)/*r* = (*p*<sup>*k*/2</sup> − 1) · (*p*<sup>*k*/2</sup> + 1)/*r* is a multiple of *p*<sup>*k*/2</sup> − 1.
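A quick numerical illustration of this proof, with toy parameters chosen for illustration (*p* = 7 and *r* = 5, for which the embedding degree is *k* = 4):

```python
# Numerical check of Property 4.4 with small illustrative parameters.
p, r = 7, 5

# k is the smallest integer with r | p^k - 1; here it comes out to 4.
k = min(i for i in range(1, 13) if (p ** i - 1) % r == 0)
assert k == 4

# r divides p^(k/2) + 1 but not p^(k/2) - 1 ...
assert (p ** (k // 2) + 1) % r == 0
assert (p ** (k // 2) - 1) % r != 0

# ... hence (p^k - 1)/r is a multiple of p^(k/2) - 1 (480 = 10 * 48 here).
assert ((p ** k - 1) // r) % (p ** (k // 2) - 1) == 0
```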

For all ξ ∈ **F**<sup>⋆</sup>*pk*/2, we know that ξ<sup>*p*<sup>*k*/2</sup>−1</sup> = 1 (a consequence of Fermat's little theorem). Consequently, the final exponentiation of the Tate pairing kills every factor of the result belonging to a proper subfield of **F***pk*. Miller's computation can be simplified by dropping *v*1 and *v*2. With the same remark, we can also simplify the functions *l*1 and *l*2 into

$$\begin{aligned} l_1(x_Q, y_Q\sqrt{\nu}) &= Z_{2T}Dy_Q\sqrt{\nu} - B(Dx_Q - X_T) - 2Y_T,\\ l_2(x_Q, y_Q\sqrt{\nu}) &= Z_T^3 E y_Q\sqrt{\nu} - Z_T F(Z_T^2 x_Q) - Y_T E. \end{aligned} \tag{6}$$

This method can be applied for every pairing with a final exponentiation. In the case of the Weil pairing, we can also apply it by raising the result of the Weil pairing to the power *p*<sup>*k*/2</sup> − 1. The cost of this exponentiation will be studied in Section 4.4.

In order to illustrate the simplification of the computation with the use of a twist, we compare two computations of the doubling step in Miller's algorithm. The Miller Lite execution is the computation of Miller's algorithm for the Tate pairing (*Miller*(*P*,*Q*)). The Miller Full execution is the computation of *Miller*(*Q*,*P*). Table 2 compares the cost of the doubling step in Miller Lite and Miller Full with and without the use of a twisted elliptic curve.


| Miller | Without twist | With twist |
|--------|---------------|------------|
| Lite | 8*Sp* + (12+4*k*)*Mp* + 2*Spk* + 2*Mpk* | 4*Sp* + (7+*k*)*Mp* + *Spk* + *Mpk* |
| Full | 3*kMp* + 10*Spk* + 14*Mpk* | *kMp* + 5*Spk* + 7*Mpk* |

**Table 2.** Cost of Miller Lite and Miller Full
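To get a feel for the savings, the sketch below evaluates the Table 2 counts in base-field multiplications, under the simplifying assumptions *Sp* = *Mp*, *Spk* = *Mpk*, and the tower-field estimate *Mpk* = 3<sup>*i*</sup> 5<sup>*j*</sup> *Mp* for *k* = 2<sup>*i*</sup> 3<sup>*j*</sup> (discussed in the next subsection). The function names are illustrative.

```python
# Rough evaluation of Table 2 in units of the base-field multiplication M_p.
# Simplifying assumptions (not from the chapter): S_p = M_p, S_{p^k} = M_{p^k},
# and M_{p^k} = 3^i * 5^j * M_p for k = 2^i * 3^j (Karatsuba / Toom-Cook).
def ext_mul_cost(i, j):
    """M_{p^k} in units of M_p for k = 2^i * 3^j."""
    return 3 ** i * 5 ** j

def miller_lite_doubling_cost(i, j, twist=True):
    """Doubling-step cost of Miller Lite in units of M_p, from Table 2."""
    k = 2 ** i * 3 ** j
    mk = ext_mul_cost(i, j)
    if twist:
        return 4 + (7 + k) + mk + mk            # 4Sp + (7+k)Mp + Spk + Mpk
    return 8 + (12 + 4 * k) + 2 * mk + 2 * mk   # 8Sp + (12+4k)Mp + 2Spk + 2Mpk

# For k = 12 = 2^2 * 3: M_{p^12} is about 45 M_p, and the twist roughly
# halves the doubling-step cost (248 M_p down to 113 M_p).
assert ext_mul_cost(2, 1) == 45
assert miller_lite_doubling_cost(2, 1, twist=False) == 248
assert miller_lite_doubling_cost(2, 1, twist=True) == 113
```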


#### 4.2. Pairing friendly fields and elliptic curves

The computation of pairings implies computations over extension fields of the form **F***pk*. If the embedding degree *k* is smooth, then the arithmetic in **F***pk* can be built step by step. A complete and extensive definition of smooth numbers is given in [50]; we recall here an intuitive definition.

Definition 4.5. A *smooth integer* is an integer whose prime factors are all small.

*Example* 4.6. An integer of the form 2*<sup>i</sup>* 3*<sup>j</sup>* is smooth.

We illustrate how a smooth integer *k* allows a construction of **F***pk* with a tower field.

*Example* 4.7. Let *l* be a prime number and *m* an integer such that *k* = *lm*. The extension **F***pk* of **F***p* can be constructed as an extension of degree *l* of **F***pm*. We suppose that we have already constructed the extension **F***pm*. Let *P*(*X*) be an irreducible polynomial of degree *l* in **F***pm*[*X*]. Then **F***plm* = **F**(*pm*)*l* is constructed as the quotient

$$\mathbb{F}_{p^{lm}} = \mathbb{F}_{p^m}[X]/(P(X)).$$

We use the tower field construction in order to optimize the multiplication over **F***pk*. We will see in Section 5 that for extensions of degree 2 and 3, we can use the Karatsuba and Toom Cook multiplications. The tower field construction reduces the number of elementary operations over **F***p* needed to compute a multiplication in **F***pk* [35].
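A minimal sketch of such a tower, with the illustrative toy parameters *p* = 13 and β = 2 (so that *p* ≡ 1 mod 12 and β is a non-square): we build **F***p*<sup>2</sup> = **F***p*[*T*]/(*T*<sup>2</sup> − β), then **F***p*<sup>4</sup> = **F***p*<sup>2</sup>[*U*]/(*U*<sup>2</sup> − *T*), so each floor only needs degree-2 multiplications over the floor below.

```python
# Two-floor tower F_p -> F_{p^2} -> F_{p^4}. Elements of each floor are
# coefficient pairs over the floor below. Parameters are illustrative.
p = 13
beta = 2                                # non-square mod 13: T^2 - 2 is irreducible

def mul2(z, w):
    """Multiply in F_{p^2} = F_p[T]/(T^2 - beta): z = z0 + z1*T."""
    (a0, a1), (b0, b1) = z, w
    return ((a0 * b0 + beta * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def add2(z, w):
    return ((z[0] + w[0]) % p, (z[1] + w[1]) % p)

def mul4(z, w):
    """Multiply in F_{p^4} = F_{p^2}[U]/(U^2 - T): z = Z0 + Z1*U, Zi in F_{p^2}."""
    (A0, A1), (B0, B1) = z, w
    T = (0, 1)                          # the element T of F_{p^2}
    c0 = add2(mul2(A0, B0), mul2(T, mul2(A1, B1)))   # U^2 reduces to T
    c1 = add2(mul2(A0, B1), mul2(A1, B0))
    return (c0, c1)

# U^4 = T^2 = beta, so U is a 4th root of beta adjoined by the tower:
U = ((0, 0), (1, 0))
U2 = mul4(U, U)
U4 = mul4(U2, U2)
assert U2 == ((0, 1), (0, 0))           # U^2 = T
assert U4 == ((beta, 0), (0, 0))        # U^4 = beta
```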

A. Menezes and N. Koblitz [48] proposed the definition of pairing friendly elliptic curves: these are elliptic curves suitable for pairing computation. Pairing friendly fields are defined with *k* smooth.

Definition 4.8. A pairing friendly field **F***pk* is an extension of a finite field **F***p* with the following properties:

- the characteristic *p* is such that *p* ≡ 1 mod 12,
- the embedding degree *k* is such that *k* = 2<sup>*i*</sup> 3<sup>*j*</sup>.

Pairing friendly fields are such that the polynomial reduction over the extension **F***pk* is very easy to compute [50, Theorem 3.75].

Theorem 4.9. *Let* β ∈ **F***p* *be neither a square nor a cube in* **F***p*, *and let* **F***pk* *be a pairing friendly field with k* = 2<sup>*i*</sup> 3<sup>*j*</sup>. *Then the polynomial X*<sup>*k*</sup> − β *is irreducible over* **F***p*.
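These hypotheses are cheap to test with Euler-type criteria; the snippet below checks them for the illustrative prime *p* = 13 and β = 2:

```python
# Checking the hypotheses of Theorem 4.9 for toy parameters (illustrative).
p = 13
assert p % 12 == 1                            # pairing friendly characteristic

beta = 2
is_square = pow(beta, (p - 1) // 2, p) == 1   # Euler's criterion for squares
is_cube = pow(beta, (p - 1) // 3, p) == 1     # cube criterion, valid as 3 | p - 1
assert not is_square and not is_cube

# Sanity check by brute force: no x in F_13 satisfies x^2 = 2 or x^3 = 2.
assert all(pow(x, 2, p) != beta and pow(x, 3, p) != beta for x in range(p))
```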


Using the definition and the above property, we construct the extension **F***pk* = **F***p*[*X*]/(*X*<sup>*k*</sup> − β) using several extensions of degree 2 and 3. The construction is done step by step, adjoining square or cube roots of β and of the previously adjoined roots.

*Example* 4.10. Example of a possible tower field for *k* = 2<sup>2</sup> × 3 = 12:

$$\mathbb{F}_p \xrightarrow{2} L = \mathbb{F}_p[T]/(T^2 - \beta),$$

$$L \xrightarrow{3} M = L[U]/(U^3 - T),$$

$$M \xrightarrow{2} N = M[V]/(V^2 - U).$$

The representations of the fields *L*, *M* and *N* are as follows:

$$\begin{aligned} L &= \{l_0 + l_1 T, \text{ with } l_0, l_1 \in \mathbb{F}_p\}, \\ M &= \{m_0 + m_1 U + m_2 U^2, \text{ with } m_0, m_1, m_2 \in L\}, \\ N &= \{n_0 + n_1 V, \text{ with } n_0, n_1 \in M\}. \end{aligned}$$

The arithmetic in **F***pk* can be decomposed at each floor of the tower field construction. As *k* is a product of powers of 2 and 3, the Karatsuba and Toom Cook methods are the most suitable for improving the multiplication in **F***pk*. We consider that a multiplication in **F***pk* with *k* = 2<sup>*i*</sup> 3<sup>*j*</sup> involves 3<sup>*i*</sup> 5<sup>*j*</sup> multiplications in **F***p*, which is denoted *Mpk* = 3<sup>*i*</sup> 5<sup>*j*</sup> *Mp*.

#### 4.3. Cyclotomic subgroup and squaring

A. Lenstra and M. Stam introduced in [52] an efficient method for squaring. They use the structure of a cyclotomic subgroup, constructing an extension of degree 6 with a polynomial different from *X*<sup>6</sup> − β. The cyclotomic subgroup **G**φ*k*(*p*) is the subgroup of order φ*k*(*p*) of **F**<sup>⋆</sup>*pk*, where φ*k*(*p*) is the *k*th cyclotomic polynomial evaluated at *p*. The cyclotomic polynomials are constructed such that their roots are the primitive roots of unity.

The multiplication developed by Lenstra and Stam is interesting for computing squares in a degree 6 extension of **F***p* (or a degree that is a multiple of 6); it could be interesting to generalize it to other extension degrees. They construct the degree 6 extension using the cyclotomic polynomial φ*k*(*X*) = *X*<sup>*k*/3</sup> − *X*<sup>*k*/6</sup> + 1. This method can be used for every extension degree that is a multiple of 6.

$$\text{Let } \alpha \in \mathbb{G}_{\phi_k(p)},\ \alpha = \sum_{i=0}^{k-1} a_i \gamma^i, \text{ where for all } i,\ a_i \in \mathbb{F}_p \text{ and } \mathcal{B} = (1, \gamma, \gamma^2, \dots, \gamma^{k-1}) \text{ is a basis of } \mathbb{F}_{p^k}.$$

We are seeking the general expression of an element of **G**φ*k*(*p*). We consider α as a polynomial in several variables over **F***p* (the *a<sub>i</sub>*s), with coefficients that are powers of γ in **F***pk*.

As α belongs to the cyclotomic subgroup **G**φ*k*(*p*), the order of α divides the cardinality of **G**φ*k*(*p*), which is φ*k*(*p*). So we have α<sup>*p*<sup>*k*/3</sup>−*p*<sup>*k*/6</sup>+1</sup> = 1 in **G**φ*k*(*p*). This equality can be written α<sup>*p*<sup>*k*/3</sup>+1</sup> = α<sup>*p*<sup>*k*/6</sup></sup>.
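This relation only depends on the order of α dividing φ*k*(*p*), so it can be checked in a cyclic model of **F**<sup>⋆</sup>*pk*: the group is cyclic of order *p*<sup>*k*</sup> − 1, so we may identify an element *g*<sup>*e*</sup> with its exponent *e* and work modulo *p*<sup>*k*</sup> − 1. The parameters *p* = 5, *k* = 6 below are illustrative.

```python
# Exponent-level check of alpha^(p^(k/3)+1) = alpha^(p^(k/6)) for every
# element of the subgroup of order phi_k(p) in a cyclic model of F_{p^k}^*.
p, k = 5, 6
phi = p ** (k // 3) - p ** (k // 6) + 1   # phi_6(p) = p^2 - p + 1 = 21
n = p ** k - 1                            # order of the cyclic group F_{p^6}^*
assert n % phi == 0

cofactor = n // phi
for m in range(phi):                      # alpha = g^(m*cofactor) has order dividing phi
    e = m * cofactor
    # alpha^(p^(k/3)+1) == alpha^(p^(k/6))  <=>  exponents agree mod p^k - 1
    assert ((p ** (k // 3) + 1) * e) % n == (p ** (k // 6) * e) % n
```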

In order to find the decomposition of α × α<sup>*p*<sup>*k*/3</sup></sup> − α<sup>*p*<sup>*k*/6</sup></sup>, we formally compute α<sup>*p*<sup>*k*/3</sup></sup> and α<sup>*p*<sup>*k*/6</sup></sup> and write

$$
\alpha \times \alpha^{p^{k/3}} - \alpha^{p^{k/6}} = \sum\_{i=0}^{k-1} \nu\_i \gamma^i.
$$

where


$$\begin{aligned} \nu\_0 &= a\_1^2 - a\_0 a\_2 - a\_4 - a\_4^2 + a\_3 a\_5, \\ \nu\_1 &= -a\_0 + a\_1 a\_2 + a\_3 - 2a\_0 a\_3 + a\_3^2 - a\_2 a\_4 - a\_1 a\_5, \\ \nu\_2 &= -a\_0 a\_1 + a\_3 a\_4 - a\_5 - 2a\_2 a\_5 + a\_5^2, \\ \nu\_3 &= -a\_1 - a\_2 a\_3 + 2a\_1 a\_4 - a\_4^2 - a\_0 a\_5 + a\_3 a\_5, \\ \nu\_4 &= a\_0^2 + a\_1 a\_2 + a\_3 - 2a\_0 a\_3 - a\_4 a\_5, \\ \nu\_5 &= -a\_2 + a\_2^2 - a\_1 a\_3 - a\_0 a\_4 + a\_3 a\_4 - 2a\_2 a\_5. \end{aligned}$$

As α ∈ **G**<sup>φ</sup>*<sup>k</sup>*(*p*), we have that ∑<sup>*k*−1</sup><sub>*i*=0</sub> *vi*γ*<sup>i</sup>* = 0. With this equation, we construct a system in the *ai*; solving this system gives us the general form of an element of **G**<sup>φ</sup>*<sup>k</sup>*(*p*).

The subgroup **<sup>G</sup>**φ*<sup>k</sup>* (*p*) is the set of elements <sup>α</sup> such that <sup>∀</sup>*i*, *vi* <sup>=</sup> 0, which gives <sup>α</sup><sup>2</sup> <sup>=</sup> <sup>α</sup><sup>2</sup> <sup>+</sup>B.Γ. *t v*, with <sup>B</sup> = (1, <sup>γ</sup>, <sup>γ</sup>2,..., <sup>γ</sup>*k*−1) and with <sup>Γ</sup> a chosen matrix. As *<sup>v</sup>* is zero in **<sup>F</sup>***p*, we can reduce the cost of a square with this method.

Denoting α<sup>2</sup> = ∑<sup>*k*</sup><sub>*i*=1</sub> *si*γ*<sup>i</sup>*, we have the equality

$$\sum\_{i=1}^k s\_i \boldsymbol{\gamma}^i = (\sum\_{i=1}^k a\_i \boldsymbol{\gamma}^i)^2 + \mathcal{B} \boldsymbol{\Gamma}^t \boldsymbol{\nu}.$$

We can formally develop the right-hand expression, and for a well-chosen matrix Γ the formula for a square in **F***pk* is simplified. For instance, for *k* = 6 [52]:

$$\alpha^2 = \mathcal{B} \cdot \begin{pmatrix} 2a\_1 + 3a\_4(a\_4 - 2a\_1) \\ 2a\_0 + 3(a\_0 + a\_3)(a\_0 - a\_3) \\ -2a\_5 + 3a\_5(a\_5 - 2a\_2) \\ 2(a\_2 - a\_4) + 3a\_1(a\_1 - 2a\_4) \\ 2(a\_0 - a\_3) + 3a\_3(2a\_0 - a\_3) \\ -2a\_2 + 3a\_2(a\_2 - 2a\_5) \end{pmatrix} . \tag{7}$$


Granger, Page and Smart apply this method to construct Table 3 [41].


| Extension degree *k* | Cost of a square in **F***pk* |
|---|---|
| 6 | 4.5 *Mp* |
| 12 | 18 *Mp* + 12 *Sp* |
| 24 | 84 *Mp* + 24 *Sp* |

**Table 3.** Complexity of a square in **<sup>F</sup>***pk*

In the particular case where *k* = 6 and *p* ≡ 2 (mod 9), the cost of a square with the Lenstra and Stam method is less than 0.75 *Mpk*, which is usually the ratio of a square compared to a multiplication.

*Example* 4.11. In **F***p*<sup>6</sup>, a square with the Lenstra and Stam method costs 6 × 0.75 *Mp* ≈ 4.5 *Mp*. With the classical ratio, a square in **F***p*<sup>6</sup> costs 15 × 0.75 *Mp* ≈ 11 *Mp*.

#### 4.4. The final exponentiation

The Tate pairing (and also the Ate and optimal Ate pairings) is computed in two steps: first the execution of Miller's algorithm and then a final exponentiation. This exponentiation is a very expensive operation, as it takes place in **F***pk* and the exponent (*p<sup>k</sup>* −1)/*r* is a large integer. In order to simplify this exponentiation, it is split in two parts [48] using the fact that:

$$\frac{(p^k - 1)}{r} = \frac{(p^k - 1)}{\phi\_k(p)} \times \frac{\phi\_k(p)}{r}.$$

where φ*k*(*p*) is the evaluation in *p* of the *k*-th cyclotomic polynomial.

The first part of the exponentiation uses the twisted elliptic curve and it is equivalent to computing the Frobenius map of elements in **F***pk* . The second part is a reduced exponentiation in **F***pk* which is performed with classical method for exponentiation.

#### *4.4.1. First part of the exponentiation*

We consider here the exponentiation to the power (*p<sup>k</sup>* −1)/<sup>φ</sup>*<sup>k</sup>*(*p*). We can first remark that if *k* = 2<sup>i</sup>3<sup>j</sup>, then <sup>φ</sup>*k*(*p*) = *p<sup>k/3</sup>* − *p<sup>k/6</sup>* + 1 and (*p<sup>k</sup>* −1)/<sup>φ</sup>*<sup>k</sup>*(*p*) = (*p<sup>k/2</sup>* − 1)(*p<sup>k/6</sup>* + 1). Using a twist, the result of Miller's algorithm is of the form (*X* +*Y*<sup>√</sup>ν) with *X*,*Y* ∈ **F***p<sup>k/2</sup>*.
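This factorization can be checked numerically. The sketch below (plain Python, with illustrative primes) verifies that for *k* = 6, (*p*<sup>6</sup> −1)/φ6(*p*) = (*p*<sup>3</sup> −1)(*p* +1), where φ6(*p*) = *p*<sup>2</sup> −*p* +1:

```python
def check_easy_part_factorisation(p, k=6):
    # phi_6(p) = p^2 - p + 1; the easy-part exponent is
    # (p^k - 1)/phi_k(p) = (p^{k/2} - 1)(p^{k/6} + 1)
    phi6 = p**2 - p + 1
    assert (p**k - 1) % phi6 == 0
    return (p**k - 1) // phi6 == (p**(k // 2) - 1) * (p**(k // 6) + 1)

# holds for any p; tried here on a few primes of various sizes
for p in (5, 7, 101, 2**31 - 1):
    assert check_easy_part_factorisation(p)
```

The identity follows from *p*<sup>6</sup> −1 = (*p*<sup>3</sup> −1)(*p* +1)(*p*<sup>2</sup> −*p* +1).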

The computation of (*X* +*Y*<sup>√</sup>ν)<sup>*p<sup>k/2</sup>*−1</sup> can be decomposed as

$$(X+Y\sqrt{\mathbf{v}})^{p^{k/2}} \times (X+Y\sqrt{\mathbf{v}})^{-1}.$$

As (*X* +*Y*<sup>√</sup>ν)<sup>−1</sup> = (*X* +*Y*<sup>√</sup>ν)<sup>*p<sup>k/2</sup>*</sup>, we have that


$$(X+Y\sqrt{\mathbf{v}})^{p^{k/2}-1} = (X+Y\sqrt{\mathbf{v}})^{2p^{k/2}}.$$

Raising an element of **F***pk* to the power *p<sup>k/2</sup>* is a Frobenius operation, which mainly consists of shifts. The total cost of the exponentiation to the power (*p<sup>k/2</sup>* −1) is a square in **F***pk* and a Frobenius application. Let (*X*′ +*Y*′<sup>√</sup>ν) be the result of (*X* +*Y*<sup>√</sup>ν)<sup>*p<sup>k/2</sup>*−1</sup>.

We then have to compute (*X*′ +*Y*′<sup>√</sup>ν)<sup>*p<sup>k/6</sup>*+1</sup>, which is another application of the Frobenius.

Let γ be a root of *X<sup>k</sup>* −β in **F***pk*. An element *a* of **F***pk* can be decomposed as *a* = ∑<sup>*k*−1</sup><sub>*i*=0</sub> *ai*γ*<sup>i</sup>*, with *ai* ∈ **F***p*.

The Frobenius property of a finite field gives *a<sup>p</sup>* = ∑<sup>*k*−1</sup><sub>*i*=0</sub> *ai*γ*<sup>ip</sup>* and, recursively,

$$a^{p^j} = \sum\_{i=0}^{k-1} a\_i \gamma^{ip^j}.$$

For *i* and *j* two integers, let *qi j* and *ri j* be the quotient and the remainder of the Euclidean division of *ip<sup>j</sup>* by *k*. We know that

$$
\gamma^{ip^j} = \beta^{q\_{ij}\bmod(p)} \gamma^{r\_{ij}}.
$$
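This relation makes the Frobenius essentially free: each coefficient is multiplied by a precomputed constant and moved to position *ri j*. A small Python sketch (with the illustrative parameters *p* = 7, *k* = 3, β = 2, where *X*<sup>3</sup> −2 is irreducible over **F**7) checks the formula against a naive exponentiation:

```python
p, k, beta = 7, 3, 2  # F_{p^k} = F_p[X]/(X^k - beta); 2 is not a cube modulo 7

def poly_mul(a, b):
    # product in F_{p^k}, reducing with gamma^k = beta
    c = [0] * (2 * k - 1)
    for i in range(k):
        for j in range(k):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    for i in range(2 * k - 2, k - 1, -1):
        c[i - k] = (c[i - k] + beta * c[i]) % p
    return c[:k]

def naive_pow(a, e):
    r = [1] + [0] * (k - 1)
    for _ in range(e):
        r = poly_mul(r, a)
    return r

def frobenius(a, j=1):
    # a^(p^j) via gamma^(i p^j) = beta^(q_ij) * gamma^(r_ij)
    res = [0] * k
    for i in range(k):
        q, r = divmod(i * p**j, k)
        res[r] = (res[r] + a[i] * pow(beta, q, p)) % p
    return res

a = [3, 5, 1]
assert frobenius(a) == naive_pow(a, p)        # a^p
assert frobenius(a, 2) == naive_pow(a, p**2)  # a^{p^2}
```

The `frobenius` routine costs at most *k* multiplications by constants, in line with the complexity claimed below.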

The computation of (*X*′ +*Y*′<sup>√</sup>ν)<sup>*p<sup>k/6</sup>*+1</sup> can be decomposed as

$$(X' + Y'\sqrt{\mathbf{v}})^{p^{k/6}+1} = (X'^{p^{k/6}} + Y'^{p^{k/6}}\sqrt{\mathbf{v}}^{(p^{k/6})}) \times (X' + Y'\sqrt{\mathbf{v}}).$$

For example, if we describe what happens when the variable *X*′ is raised to the power *p<sup>k/6</sup>*, we obtain the following steps:

$$\begin{cases} \begin{aligned} X' &= \sum\_{i=0}^{k/2-1} x\_i \gamma^i, \\ X'^{p^{k/6}} &= \sum\_{i=0}^{k/2-1} x\_i \gamma^{i p^{k/6}}, \\ X'^{p^{k/6}} &= \sum\_{i=0}^{k/2-1} (x\_i \beta^{q\_{i(k/6)} \bmod (p)}) \gamma^{r\_{i(k/6)}}. \end{aligned} \end{cases}$$

We have to compute the *k*/2 products (*xi*β<sup>*q*<sub>*i*(*k*/6)</sub> mod(*p*)</sup>), with *xi* and β<sup>*q*<sub>*i*(*k*/6)</sub> mod(*p*)</sup> in **F***p*. The total complexity of the first part of the exponentiation is 2*kMp* +*Spk* +*Mpk*, plus shifts and multiplications by β.

#### *4.4.2. Second part of the exponentiation*

The second part of the exponentiation is the hard part. We use classical methods of exponentiation like the Lucas sequences [16] or sliding windows [40]. In [67], more intricate methods are developed.



The Lucas sequence method induces a cost of a square and a multiplication in the intermediate field **F***p<sup>k/2</sup>* for each bit of the exponent. The sliding window method has the advantage that the squares are computed in the cyclotomic subgroup, and consequently we can use the method described in Section 4.3. The complexity of the two methods is linearly related to the number of bits in the binary decomposition of the exponent. We recall here the complexity of the methods and refer to, for instance, the book [25] for more details.

Let *br* be the number of bits of *r*, the prime number dividing the cardinality of *E*. Let *bpk* be the number of bits of *p<sup>k</sup>*. The respective sizes of *br*, *bpk*, *r* and *p<sup>k</sup>* are fixed by the security level we want to reach. We give them in Table 4. The number of positive integers smaller than *k* and coprime to *k* is ϕ(*k*), the Euler totient function evaluated at *k*. The number ϕ(*k*) is also the number of primitive *k*th roots of unity; it is thus the degree of the polynomial φ*k*(*X*). The exponent of the second part of the exponentiation is ((ϕ(*k*)/*k*) *bpk* −*br*) bits long.

The number of squares and multiplications involved in the computation of the exponentiation depends on (ϕ(*k*)/*k*) *bpk* −*br* = (τ*k*γ −1)*br*, where

$$\gamma = \frac{b\_{p^k}}{b\_r},$$

$$\tau\_k = \frac{\varphi(k)}{k} = \begin{cases} 1/2 & \text{if } k = 2^i,\ i \geq 1, \\ 1/3 & \text{if } k = 2^i 3^j,\ i, j \geq 1. \end{cases}$$

The number γ is related to the security levels given in Table 4, and it is a good indication of the total complexity of the exponentiation.
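As a quick numeric illustration of the exponent size (ϕ(*k*)/*k*) *bpk* −*br*, the sketch below (plain Python, the function names are illustrative) evaluates it for *k* = 12 at the 128-bit security level:

```python
from math import gcd

def totient(k):
    # Euler totient by direct count; fine for the small k used here
    return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

def hard_exponent_bits(k, b_pk, b_r):
    # approximate bit length of phi_k(p)/r: (phi(k)/k) * b_{p^k} - b_r
    return totient(k) * b_pk // k - b_r

# k = 12 at 128-bit security: b_{p^k} = 3072, b_r = 256
assert hard_exponent_bits(12, 3072, 256) == 768
```

The hard part of the exponentiation thus works with an exponent of roughly 768 bits in this setting, instead of the full 3072 −256 bits.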


| Security level in bits | 80 | 128 | 192 | 256 |
|---|---|---|---|---|
| Minimal number of bits for *r* | 160 | 256 | 384 | 512 |
| Minimal number of bits for *p<sup>k</sup>* | 1 024 | 3 072 | 7 680 | 15 360 |
| γ = *bpk*/*br* | 6.4 | 12 | 20 | 30 |

**Table 4.** Security level

The complexity of the Lucas sequence method is [16]

$$C\_{Luc} = \left(M\_{p^{k/2}} + S\_{p^{k/2}}\right) \log\_2\left(\frac{\phi\_k(p)}{r}\right).$$

The complexity of the sliding window method is [40]

$$C\_{\rm sw} = \left(\frac{\log\_2(e)}{\log\_2(p)} + \log\_2(p)\right) S\_{G\_{\Phi\_k(p)}} + \left(\frac{\log\_2(e)}{\log\_2(p)} \left(2^{n-1} - 1\right) + \frac{\log\_2(e)}{n+2} - 1\right) M\_{p^k},$$

where *e* = <sup>φ</sup>*<sup>k</sup>*(*p*)/*r*, and *n* is the integer giving the size of the window in bits, generally *n* = 4.

#### 5. Arithmetical optimisation


As the computation of pairings relies on arithmetic over finite fields, a way to improve the efficiency of the computation of pairings is to improve the arithmetic of finite fields and their extensions.

The elliptic curves used in pairing based cryptography are constructed through the complex multiplication method. These construction methods do not allow us to fix *p*, the characteristic of the field **F***p*; we can only choose the number of bits of *p*. As a consequence, the arithmetic of pairings is particular: we cannot choose *p* with a special structure which would provide an efficient arithmetic, like for example a sparse decomposition or a Mersenne or pseudo-Mersenne prime. A very nice overview of the construction of elliptic curves for pairing based cryptography is available in the work of Freeman, Scott and Teske [33].

We thus begin this section with the presentation of efficient multiplications in finite fields and extensions of finite fields. We recall the different methods for a multiplication and provide a comparison of their efficiency in Sections 5.2, 5.3 and 5.4. In Section 5.5, we consider the representation of elements in a finite field. Indeed, in Section 5.1 we describe the classical representation of a finite field, which is used for the description of the multiplications; but it is possible to have original representations of finite fields, which can offer opportunities for improvement in pairing based cryptography. In Section 5.6 we consider how the choice of coordinates and of the equation of the elliptic curve can improve the efficiency of the computation of pairings.

#### 5.1. Setting

We consider in this section the cost of operations over **F***pk* in number of operations over **F***p*. We give the notations for the rest of the chapter. Let **F***<sup>p</sup>* be a finite field of prime characteristic *p*, with *p* a large prime. Let **F***pk* be the extension of degree *k* of **F***p*. The extension **F***pk* is defined through an irreducible polynomial *P*(*X*) of degree *k*. Let *A* and *B* be two elements of **F***pk*. The elements of **F***pk* are described in the basis <sup>B</sup> = (1, γ, γ<sup>2</sup>,..., γ<sup>*k*−1</sup>), for γ a root of *P*(*X*) in **F***pk*. An element of **F***pk* is a polynomial in γ with coefficients in **F***p*:

$$\mathbb{F}\_{p^k} = \{ \sum\_{i=0}^{k-1} a\_i \gamma^i, a\_i \in \mathbb{F}\_p \}.$$

*A* is represented by *A* = ∑<sup>*k*−1</sup><sub>*i*=0</sub> *ai*γ*<sup>i</sup>* and *B* by *B* = ∑<sup>*k*−1</sup><sub>*i*=0</sub> *bi*γ*<sup>i</sup>*. The product of *A* and *B* can be done in two steps. The first one is the product of the polynomials, to obtain the polynomial *C*(*X*) = *A*(*X*)×*B*(*X*) of degree (2*k*−2). The second step is the polynomial reduction modulo *P*(*X*). The cost of this reduction depends on the form of *P*(*X*): the sparser *P*(*X*) is, the more efficient the reduction. As a consequence, *P*(*X*) should, whenever possible, be chosen of the form *X<sup>k</sup>* −β, with β ∈ **F***<sup>p</sup>* [50]. In this case, the polynomial reduction is reduced to multiplications by β and (*k* −1) additions:

$$C(X) = C\_0(X) + C\_1(X)X^k \equiv C\_0(X) + \beta C\_1(X) \bmod (P(X)).$$

with *C*0(*X*), *C*1(*X*) of degree at most (*k* −1).
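The two steps can be sketched in Python as follows. The parameters *p* = 13, *k* = 4, β = 2 are illustrative (*X*<sup>4</sup> −2 is irreducible over **F**13):

```python
p, k, beta = 13, 4, 2  # P(X) = X^k - beta

def mul_two_steps(a, b):
    # step 1: plain polynomial product, degree up to 2k - 2
    c = [0] * (2 * k - 1)
    for i in range(k):
        for j in range(k):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    # step 2: reduction modulo X^k - beta: C_0 + C_1 X^k -> C_0 + beta * C_1
    return [(c[i] + beta * c[i + k]) % p if i + k < 2 * k - 1 else c[i]
            for i in range(k)]

# gamma * gamma^3 = gamma^4 reduces to beta
assert mul_two_steps([0, 1, 0, 0], [0, 0, 0, 1]) == [beta, 0, 0, 0]
```

Step 2 costs only the (*k* −1) multiplications by β and (*k* −1) additions, as stated above.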

The following theorem [50, Theorem 3.75] gives us a natural construction of the extension **F***pk* using a sparse representation.



Theorem 5.1. *Let k be an integer and* **F***pk an extension of degree k of* **F***p, for p a prime number. There exists an element* β *of* **F***<sup>p</sup> which is not a k-th power in* **F***<sup>p</sup> and such that the polynomial X<sup>k</sup>* −β *is irreducible over* **F***p.*

Thus, we can consider that the complexity of a product in **F***pk* is highly dependent on the complexity of the product of two polynomials, neglecting the complexity of the modular reduction. We present below the possible polynomial multiplications.

#### 5.2. The school book method

As the name hints, the school book multiplication is the one we learned at school. The school book product of two polynomials is the following:

$$A(\boldsymbol{\gamma}) \times B(\boldsymbol{\gamma}) = \sum\_{i=0}^{2k-2} \left( \sum\_{j=0}^{i} (a\_j b\_{i-j}) \right) \boldsymbol{\gamma}^i.$$

This simple method is very expensive; indeed, its complexity is quadratic in the degree of the polynomials. The cost of this method is *k*<sup>2</sup> multiplications in **F***<sup>p</sup>* plus *k*(2*k* − 1) additions, thus the complexity is *k*(2*k* −1)*Ap* +*k*<sup>2</sup>*Mp*.

The interpolation methods are an alternative to the school book method; they are efficient for *k* greater than a threshold value, which depends on the method.

#### 5.3. Interpolation method

Let *A*(*X*) = *a*<sup>0</sup> + *a*1*X* + ... + *ak*<sup>−</sup>1*X<sup>k</sup>*<sup>−1</sup> and *B*(*X*) = *b*<sup>0</sup> + *b*1*X* + ... + *bk*<sup>−</sup>1*X<sup>k</sup>*<sup>−1</sup> be the polynomials obtained by substitution (γ becomes *X*). The result *C*(*X*) of *A*(*X*) × *B*(*X*) is a polynomial of degree (2*k* − 2). It is known that a polynomial of degree *m* is determined by its values at (*m* + 1) distinct points.

Theorem 5.2. *Let P*(*X*) *be a polynomial of degree m; then P*(*X*) *is determined by its values at* (*m*+1) *distinct points.*

The multiplications by the interpolation method are based on this theorem. The methodology is to find (2*k*−1) images of the polynomial *C*(*X*) and then to reconstruct *C*(*X*) by interpolation. All multiplications by interpolation follow this scheme:


1. Find (2*k* −1) distinct values in **F***p*, denoted by α0, α1,..., α2*k*−2.

2. Evaluate the polynomials *A*(*X*) and *B*(*X*) at these values and keep in memory *A*(α0),...,*A*(α2*k*−2), *B*(α0),...,*B*(α2*k*−2).

3. Compute the evaluations of *C* at these (2*k* −1) values: *C*(α*i*) = *A*(α*i*)×*B*(α*i*).

4. Use these evaluations of *C*(*X*) to reconstruct by interpolation the polynomial *C*(*X*).

The complexity of a multiplication by interpolation depends:

1. on the evaluation of the *A*(α*i*), *B*(α*i*),

2. on the multiplications in **F***p*: *C*(α*i*) = *A*(α*i*)×*B*(α*i*),

3. and on the reconstruction of the polynomial expression of *C*(*X*).


If we compare the interpolation method with the school book method, we substitute some multiplications in **F***<sup>p</sup>* by multiplications by constants in **F***p*. The constants are determined by the choice of the α*<sup>i</sup>* values. The drawback is that the multiplication by interpolation needs more additions; but as an addition in **F***<sup>p</sup>* is less expensive than a multiplication, for some degrees *k* the interpolation methods are more efficient than the school book method.

Let *CMp* be the cost of a multiplication by a constant in **F***p*. The evaluations at (α*i*)<sub>*i*=0,...,(2*k*−2)</sub> cost

$$2(2k-1)(k-1)\left(A\_p + CM\_p\right),$$

when executed using the Horner scheme:

$$A(\alpha\_l) = a\_0 + \alpha\_l \left(a\_1 + \alpha\_l (a\_2 + \alpha\_l [\dots])\right) \dots$$
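The Horner scheme above evaluates a degree-(*k*−1) polynomial with *k*−1 additions and *k*−1 multiplications by the point; a minimal Python sketch:

```python
def horner(coeffs, x):
    # coeffs = [a_0, a_1, ..., a_{k-1}]; computes a_0 + x(a_1 + x(a_2 + ...))
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# 1 + 2*10 + 3*10^2 = 321
assert horner([1, 2, 3], 10) == 321
```

When the α*i* are small constants, each multiplication inside the loop is a multiplication by a constant, which is the *CMp* term in the cost above.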

The computation of the *C*(α*i*) = *A*(α*i*) × *B*(α*i*) involves (2*k* − 1) multiplications in **F***p*, which costs (2*k* −1)*Mp*.

Two classical interpolation methods exist: Lagrange's and Newton's.

#### *5.3.1. Lagrange's interpolation method*

We suppose that we have obtained the evaluations of the polynomials $A(X)$ and $B(X)$ in $(2k-1)$ distinct points, denoted $\alpha\_0, \alpha\_1, \dots, \alpha\_{2k-2}$. We then have the image of $C(X) = A(X) \times B(X)$ in these $(2k-1)$ points. The reconstruction of the coefficients of $C(X)$ using the Lagrange interpolation is done through the formula:

$$C(X) = \sum\_{l=0}^{2k-2} \left( C(\alpha\_l) \times \frac{\prod\_{j=0, j\neq l}^{2k-2} (X - \alpha\_j)}{\prod\_{j=0, j\neq l}^{2k-2} (\alpha\_l - \alpha\_j)} \right). \tag{8}$$

The complexity of Lagrange's interpolation is

$$(2k-1)M\_P + (2k-1)(4k-3)CM\_P + 2(2k-1)(3k-2)A\_P. \tag{9}$$
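Equation (8) can be sketched directly in Python; modular inverses are taken with Fermat's little theorem ($x^{p-2} \bmod p$). The function names and the prime are illustrative assumptions:

```python
# Lagrange reconstruction over F_p: recover the coefficients of C(X) from its
# values at 2k-1 distinct points, following Equation (8).

def poly_mul_lin(poly, c, p):
    """Multiply a coefficient list by (X - c) over F_p."""
    out = [0] * (len(poly) + 1)
    for i, a in enumerate(poly):
        out[i] = (out[i] - a * c) % p
        out[i + 1] = (out[i + 1] + a) % p
    return out

def lagrange_interpolate(alphas, values, p):
    n = len(alphas)
    coeffs = [0] * n
    for l in range(n):
        num, denom = [1], 1
        for j in range(n):
            if j != l:
                num = poly_mul_lin(num, alphas[j], p)        # prod (X - a_j)
                denom = denom * (alphas[l] - alphas[j]) % p  # prod (a_l - a_j)
        scale = values[l] * pow(denom, p - 2, p) % p         # C(a_l) / denom
        coeffs = [(c + scale * a) % p for c, a in zip(coeffs, num)]
    return coeffs

p = 101
C_true = [15, 38, 44, 65, 18]          # a degree-4 polynomial
alphas = [0, 1, 2, 3, 4]
vals = [sum(c * pow(x, i, p) for i, c in enumerate(C_true)) % p for x in alphas]
print(lagrange_interpolate(alphas, vals, p))   # recovers [15, 38, 44, 65, 18]
```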

#### *5.3.2. Newton's interpolation*

As in Lagrange's interpolation, we have at our disposal the $C(\alpha\_i)$ and we want to find the coefficients of $C(X)$. Newton's interpolation requires the construction of intermediate values.



The first step is the computation of the values $c'\_i$:

$$\begin{cases} \begin{aligned} c\_0' &= C(\alpha\_0), \\ c\_1' &= (C(\alpha\_1) - c\_0') \frac{1}{(\alpha\_1 - \alpha\_0)}, \\ c\_2' &= \left( (C(\alpha\_2) - c\_0') \frac{1}{(\alpha\_2 - \alpha\_0)} - c\_1' \right) \frac{1}{(\alpha\_2 - \alpha\_1)}, \\ &\vdots \\ c\_{2k-2}' &= \left( (C(\alpha\_{2k-2}) - c\_0') \frac{1}{(\alpha\_{2k-2} - \alpha\_0)} - c\_1' \right) \frac{1}{(\alpha\_{2k-2} - \alpha\_1)} - \dots \end{aligned} \end{cases}$$

With the $c'\_i$s, the expression of $C(X)$ is

$$C(X) = c\_0' + c\_1'(X - \alpha\_0) + c\_2'(X - \alpha\_0)(X - \alpha\_1) + \dots + c\_{2k-2}'(X - \alpha\_0)(X - \alpha\_1)\dots(X - \alpha\_{2k-3}).$$

The reconstruction of the coefficients of $C(X)$ can be done using Horner's scheme:

$$C(X) = c\_0' + (X - \alpha\_0)\Big(c\_1' + (X - \alpha\_1)\big(c\_2' + (X - \alpha\_2)(\dots + (X - \alpha\_{2k-3})\,c\_{2k-2}')\big)\Big).$$

The efficiency of the multiplication by interpolation depends on the choice of the $\alpha\_i$s. Newton's interpolation involves divisions by the differences $(\alpha\_i - \alpha\_j)$; these elements can be precomputed once and for all, as the $\alpha\_i$s are fixed. Furthermore, since we work in a finite field, the divisions by $(\alpha\_i - \alpha\_j)$ can be transformed into multiplications by the constants $(\alpha\_i - \alpha\_j)^{-1}$.

The complexity of Newton's interpolation is the sum of the complexities of the computation of the $C(\alpha\_i)$, of the $c'\_i$, and of the reconstruction of the coefficients of $C(X)$.

The complexity of Newton's interpolation is

$$4(2k^2 - 3k + 1)A\_p + 4(2k^2 - 3k + 1)CM\_p + (2k - 1)M\_p.$$
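The two steps (divided differences, then Horner reconstruction) can be sketched as follows; the helper names and test values are illustrative assumptions:

```python
# Newton interpolation over F_p: the triangular computation of the c'_i, then
# the Horner-style expansion back to monomial coefficients. The inverses of
# (alpha_j - alpha_{j-i}) are the precomputable constants mentioned above.

def newton_coeffs(alphas, values, p):
    c = list(values)
    n = len(alphas)
    for i in range(1, n):
        for j in range(n - 1, i - 1, -1):
            inv = pow(alphas[j] - alphas[j - i], p - 2, p)
            c[j] = (c[j] - c[j - 1]) * inv % p
    return c

def newton_to_monomials(alphas, c, p):
    """Expand c'_0 + (X - a_0)(c'_1 + (X - a_1)(...)) by Horner's scheme."""
    poly = [c[-1]]
    for i in range(len(c) - 2, -1, -1):
        # poly <- poly * (X - alpha_i) + c'_i
        out = [0] * (len(poly) + 1)
        for d, a in enumerate(poly):
            out[d] = (out[d] - a * alphas[i]) % p
            out[d + 1] = (out[d + 1] + a) % p
        out[0] = (out[0] + c[i]) % p
        poly = out
    return poly

p = 101
alphas = [0, 1, 2, 3, 4]
vals = [15, 79, 65, 1, 44]      # values of 15 + 38X + 44X^2 + 65X^3 + 18X^4
cp = newton_coeffs(alphas, vals, p)
print(newton_to_monomials(alphas, cp, p))   # [15, 38, 44, 65, 18]
```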

#### *5.3.3. Comparison between the two methods*

The two methods involve the same number of multiplications in the base field **F***p*, namely $(2k-1)$, for polynomials of degree $(k-1)$.

Lagrange's interpolation is very interesting when computations can be parallelised. Indeed, the computations of the terms $C(\alpha\_i) \times \frac{\prod\_{j\neq i}(X - \alpha\_j)}{\prod\_{j\neq i}(\alpha\_i - \alpha\_j)}$ are independent. Newton's interpolation involves fewer additions and multiplications by constants than Lagrange's, but we cannot parallelise the computation: the $c'\_i$ must be computed one after another.


| Operation | Lagrange | Newton |
|-----------|----------|--------|
| *Ap*  | $12k^2 - 14k + 4$ | $8k^2 - 12k + 4$ |
| *CMp* | $8k^2 - 10k + 3$  | $8k^2 - 12k + 4$ |
| *Mp*  | $2k - 1$ | $2k - 1$ |

**Table 5.** Complexity in number of operations over the base field

Lagrange's interpolation should be preferred when computations can be parallelised, and Newton's when the size of the device is limited, typically for smart cards.

#### 5.4. Karatsuba and Toom Cook methods

#### *5.4.1. Karatsuba's method*


The Karatsuba multiplication is a straightforward application of Newton's method for polynomials of degree 1. The result of the multiplication is a polynomial of degree 2, so we need 2 + 1 = 3 points of interpolation. These values are {0, 1, ∞}. The Karatsuba multiplication provides the product of two polynomials of degree 1 in 3 multiplications in the base field, instead of 4 using the school book method. The multiplications by constants in the Newton multiplication are free, because of the choice of the interpolation values. Let $A(X) = A\_0 + A\_1X$ and $B(X) = B\_0 + B\_1X$ be two polynomials of degree 1 and $C(X) = A(X) \times B(X)$.

We evaluate the polynomial $C(X)$ at the points {0, 1, ∞}, which gives equations (10).

$$\begin{aligned} C(0) &= (A\_1X + A\_0)(B\_1X + B\_0) \bmod(X) = A\_0 \times B\_0, \\ C(1) &= (A\_1X + A\_0)(B\_1X + B\_0) \bmod(X - 1) = (A\_0 + A\_1) \times (B\_0 + B\_1), \\ C(\infty) &= (A\_1X + A\_0)(B\_1X + B\_0) \bmod(X - \infty) = A\_1 \times B\_1 \times X^2 \bmod(X - \infty). \end{aligned} \tag{10}$$

The evaluation of the polynomial $C(X)$ in the 3 values involves $2A\_p + 3M\_p$ operations in the base field **F***p*. Then, we use the formulas of the Newton interpolation to reconstruct the polynomial $C(X)$.

$$\begin{cases} c\_0' = C(0) = A\_0B\_0, \\ c\_1' = (C(1) - c\_0')\frac{1}{(1-0)} = (A\_0 + A\_1)(B\_0 + B\_1) - A\_0B\_0, \\ c\_2' = \left( (C(\infty) - c\_0')\frac{1}{(\infty - 0)} - c\_1' \right)\frac{1}{(\infty - 1)} \\ \quad\; = \left( (A\_1B\_1X^2 - A\_0B\_0)\frac{1}{(X - 0)} - ((A\_0 + A\_1)(B\_0 + B\_1) - A\_0B\_0) \right)\frac{1}{(X - 1)} \bmod(X - \infty) \\ \quad\; = A\_1B\_1. \end{cases}$$

$$\begin{array}{l} C(X) = c\_0' + c\_1'X + c\_2'X(X - 1), \\ = A\_0B\_0 + ((A\_0 + A\_1)(B\_0 + B\_1) - A\_0B\_0)X + A\_1B\_1X(X - 1), \\ = A\_0B\_0 + ((A\_0 + A\_1)(B\_0 + B\_1) - A\_0B\_0 - A\_1B\_1)X + A\_1B\_1X^2. \end{array}$$

We can summarise the computation of the polynomial $C(X)$ using Karatsuba's multiplication by the following equations:

$$\begin{cases} \quad c\_0 = A\_0 \times B\_0, \\ \quad c\_1 = (A\_0 + A\_1) \times (B\_0 + B\_1), \\ \quad c\_2 = A\_1 \times B\_1, \\ \quad C(X) = c\_0 + (c\_1 - c\_0 - c\_2)X + c\_2 X^2. \end{cases} \tag{11}$$
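Equation (11) translates directly to code; this is a minimal sketch with an illustrative prime and sample values:

```python
# Karatsuba for degree-1 polynomials over F_p: 3 multiplications instead of 4.

def karatsuba_deg1(A, B, p):
    a0, a1 = A
    b0, b1 = B
    c0 = a0 * b0 % p
    c2 = a1 * b1 % p
    c1 = (a0 + a1) * (b0 + b1) % p
    return [c0, (c1 - c0 - c2) % p, c2]

p = 101
print(karatsuba_deg1([3, 7], [5, 1], p))   # (3 + 7X)(5 + X) = 15 + 38X + 7X^2
```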


For polynomials of degree 1, the complexity of Karatsuba's multiplication is 3*Mp* +4*Ap*.

The Karatsuba's multiplication can be recursively applied for polynomials of degree greater than 1. Let $A(X) = A\_0 + A\_1X + \dots + A\_mX^m$; we can split $A(X)$ in two parts of degree smaller than or equal to $\lfloor \frac{m}{2} \rfloor$:

$$A(X) = A\_0 + A\_1X + \dots + A\_{\lfloor \frac{m}{2} \rfloor - 1}X^{\lfloor \frac{m}{2} \rfloor - 1} + X^{\lfloor \frac{m}{2} \rfloor}\left( A\_{\lfloor \frac{m}{2} \rfloor} + A\_{\lfloor \frac{m}{2} \rfloor + 1}X + \dots + A\_mX^{m - \lfloor \frac{m}{2} \rfloor} \right)$$

$$= \widetilde{A\_0} + Y\widetilde{A\_1}, \text{ where we denote } Y = X^{\lfloor \frac{m}{2} \rfloor}.$$

Then, we apply the Karatsuba's multiplication to the two parts. Each of the three multiplications can also be done using the Karatsuba's multiplication. The recursive application of Karatsuba's multiplication is the most efficient method for the multiplication of polynomials whose degree is a power of 2. The asymptotic complexity of Karatsuba's multiplication is $O(m^{\log\_2(3)})$ multiplications and $O(m)$ additions, with *m* being the degree of the polynomials we want to multiply.
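A recursive sketch, restricted for simplicity to coefficient lists whose length is a power of two (so both halves of the split have equal size); the names are illustrative assumptions:

```python
# Recursive Karatsuba over F_p: split A = A0~ + Y*A1~ with Y = X^(n/2),
# recurse on three half-size products, then recombine.

def karatsuba(A, B, p):
    n = len(A)                         # assumed a power of two, len(B) == n
    if n == 1:
        return [A[0] * B[0] % p]
    h = n // 2
    A0, A1, B0, B1 = A[:h], A[h:], B[:h], B[h:]
    P0 = karatsuba(A0, B0, p)                       # low * low
    P2 = karatsuba(A1, B1, p)                       # high * high
    S = [(x + y) % p for x, y in zip(A0, A1)]
    T = [(x + y) % p for x, y in zip(B0, B1)]
    P1 = karatsuba(S, T, p)                         # (A0~+A1~)(B0~+B1~)
    C = [0] * (2 * n - 1)
    for i in range(2 * h - 1):
        C[i] = (C[i] + P0[i]) % p
        C[i + h] = (C[i + h] + P1[i] - P0[i] - P2[i]) % p
        C[i + 2 * h] = (C[i + 2 * h] + P2[i]) % p
    return C

p = 101
print(karatsuba([1, 2, 3, 4], [5, 6, 7, 8], p))   # [5, 16, 34, 60, 61, 52, 32]
```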

#### *5.4.2. Toom Cook 3 multiplication*

Exactly like Karatsuba's multiplication, the Toom Cook 3 multiplication is an application of Newton's interpolation. The Toom Cook 3 method provides the product of polynomials of degree 2 with 5 multiplications of coefficients, instead of 9 using the school book method. The values for the interpolation are {0, 1, −1, 2, ∞}. Unlike in Karatsuba's method, there are a few multiplications and divisions by constants that we cannot avoid.

Let *A*(*X*) = *A*<sup>0</sup> + *A*1*X* + *A*2*X*<sup>2</sup> and *B*(*X*) = *B*<sup>0</sup> + *B*1*X* + *B*2*X*<sup>2</sup> be polynomials of degree 2 and *C*(*X*) = *A*(*X*) × *B*(*X*) obtained using the Toom Cook method. The evaluation part of Toom Cook 3 multiplication involves 10 additions of *Ai* and *Bi*, for *i* = 0, 1, 2. The evaluation of *A*(*X*) needs 5 additions.

$$\begin{cases} \begin{aligned} A(0) &= A\_0, \\ Sp\_1 &= A\_0 + A\_2, \\ A(1) &= Sp\_1 + A\_1, \\ A(-1) &= Sp\_1 - A\_1, \\ A(2) &= A\_0 + 2A\_1 + 4A\_2, \\ A(\infty) &= A\_2X^2 \bmod(X - \infty). \end{aligned} \end{cases}$$

We begin with the evaluation of $C(X)$ in the $\alpha\_i$ for $i = 0, 1, 2, 3, 4$.

$$\begin{cases} \begin{aligned} C(0) &= A(0) \times B(0) = A\_0B\_0, \\ C(1) &= A(1) \times B(1), \\ C(-1) &= A(-1) \times B(-1), \\ C(2) &= A(2) \times B(2), \\ C(\infty) &= A(\infty) \times B(\infty) = A\_2B\_2X^4 \bmod(X - \infty). \end{aligned} \end{cases}$$

We apply Newton's method to find the coefficients $c'\_i$:

$$\begin{cases} c\_0' = C(0), \\ c\_1' = C(1) - c\_0', \\ c\_2' = \frac{1}{2} \left( C(-1) - c\_0' + c\_1' \right), \\ c\_3' = \frac{1}{6} C(2) - \frac{1}{6} c\_0' - \frac{1}{3} c\_1' - \frac{1}{3} c\_2', \\ c\_4' = A\_2 B\_2. \end{cases}$$

The reconstruction of *C*(*X*) is then

$$C(X) = c\_0' + c\_1'X + c\_2'X(X-1) + c\_3'X(X-1)(X+1) + c\_4'X(X-1)(X+1)(X-2).$$

This step can be summarised by the formula

$$\begin{array}{l} C(X) = c\_0' + (c\_1' - c\_2' - c\_3' + 2c\_4')X + (c\_2' - c\_4')X^2 \\ \quad + (c\_3' - 2c\_4')X^3 + c\_4'X^4. \end{array}$$



Which gives

$$\begin{cases} \begin{aligned} C\_0 &= c\_0', \\ C\_1 &= c\_1' - c\_2' - c\_3' + 2c\_4', \\ C\_2 &= c\_2' - c\_4', \\ C\_3 &= c\_3' - 2c\_4', \\ C\_4 &= c\_4', \\ C(X) &= C\_0 + C\_1X + C\_2X^2 + C\_3X^3 + C\_4X^4. \end{aligned} \end{cases}$$

For polynomials of degree 2, the complexity of Toom Cook 3 is $5M\_p + 11CM\_p + 11A\_p$. As for Karatsuba's method, the Toom Cook 3 method can be recursively applied. The asymptotic complexity of Toom Cook 3 multiplication is $O(m^{\log\_3(5)})$ multiplications and $O(m)$ additions, where *m* is the degree of the polynomials we want to multiply.
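The whole Toom Cook 3 pipeline, following the evaluation, $c'\_i$ and reconstruction formulas above, can be sketched as follows (the $Sp\_1$ sharing is skipped, the divisions by 2, 3 and 6 are done with precomputed inverses, and the prime and sample values are illustrative):

```python
# Toom Cook 3 over F_p: 5 multiplications for the product of two degree-2
# polynomials, interpolation points {0, 1, -1, 2, oo}.

def toom_cook_3(A, B, p):
    a0, a1, a2 = A
    b0, b1, b2 = B
    inv2, inv3, inv6 = (pow(d, p - 2, p) for d in (2, 3, 6))
    # evaluation: the 5 base-field multiplications
    C0   = a0 * b0 % p
    C1   = (a0 + a1 + a2) * (b0 + b1 + b2) % p
    Cm1  = (a0 - a1 + a2) * (b0 - b1 + b2) % p
    C2v  = (a0 + 2 * a1 + 4 * a2) * (b0 + 2 * b1 + 4 * b2) % p
    Cinf = a2 * b2 % p
    # Newton coefficients c'_i (divisions become constant multiplications)
    c0 = C0
    c1 = (C1 - c0) % p
    c2 = (Cm1 - c0 + c1) * inv2 % p
    c3 = ((C2v - c0) * inv6 - (c1 + c2) * inv3) % p
    c4 = Cinf
    # reconstruction of C(X) = C_0 + C_1 X + ... + C_4 X^4
    return [c0, (c1 - c2 - c3 + 2 * c4) % p,
            (c2 - c4) % p, (c3 - 2 * c4) % p, c4]

p = 101
print(toom_cook_3([3, 7, 2], [5, 1, 9], p))   # [15, 38, 44, 65, 18]
```

The result matches the school book product of the same polynomials; the $+2c'\_4$ term in the $X$ coefficient comes from expanding $c'\_4X(X-1)(X+1)(X-2)$.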

#### *5.4.3. Extension to other extension degrees*

The Toom Cook 3 method can be extended to Toom Cook 5; this multiplication is suited for polynomials of degree 3. Few works deal with the multiplication of polynomials of degree greater than 3. For polynomials of degree 4, we can use Karatsuba's method. As a consequence, in pairing based cryptography, fields with extension degrees of the form $2^i3^j$ are called pairing friendly, because we can use tower fields and, for each stage of the tower, the Karatsuba or Toom Cook 3 multiplication. However, in pairing based cryptography (and in cryptography in general) there are some cases where it is more interesting to use fields with extension degrees different from 2 and 3. We can cite the problem of compression (i.e. representing elements of a finite field subgroup with fewer bits than classical algorithms) for extension fields in terms of *algebraic tori* $T\_n(\mathbf{F}\_q)$ [63], or applications based on $T\_{30}(\mathbf{F}\_q)$, such as El Gamal encryption, El Gamal signatures and voting schemes in [69].

Let **F***p* be a finite field of characteristic greater than 5. For instance, for 5-term polynomials (degree 4), we can begin with Karatsuba's method and then use Karatsuba and Toom Cook 3 for each part. This construction gives an efficient multiplication for such polynomials, but not the most efficient one. For degree 5 extensions, Montgomery [58] has proposed a Karatsuba-like formula for 5-term polynomials performed using 13 base field multiplications. This work was improved by El Mrabet et al. in [30] using Newton's interpolation.

We recall here Montgomery's method for an extension of degree 5. Let $A = a\_0 + a\_1X + a\_2X^2 + a\_3X^3 + a\_4X^4$ and $B = b\_0 + b\_1X + b\_2X^2 + b\_3X^3 + b\_4X^4$ in **F***p*<sup>5</sup> with coefficients over **F***p*. Montgomery constructs the polynomial $C(X) = A(X) \cdot B(X)$ using the following formula:

$$\begin{aligned} C ={}& (a\_0 + a\_1X + a\_2X^2 + a\_3X^3 + a\_4X^4)(b\_0 + b\_1X + b\_2X^2 + b\_3X^3 + b\_4X^4) \\ ={}& (a\_0+a\_1+a\_2+a\_3+a\_4)(b\_0+b\_1+b\_2+b\_3+b\_4)(X^5-X^4+X^3) \\ &+ (a\_0-a\_2-a\_3-a\_4)(b\_0-b\_2-b\_3-b\_4)(X^6-2X^5+2X^4-X^3) \\ &+ (a\_0+a\_1+a\_2-a\_4)(b\_0+b\_1+b\_2-b\_4)(-X^5+2X^4-2X^3+X^2) \\ &+ (a\_0+a\_1-a\_3-a\_4)(b\_0+b\_1-b\_3-b\_4)(X^5-2X^4+X^3) \\ &+ (a\_0-a\_2-a\_3)(b\_0-b\_2-b\_3)(-X^6+2X^5-X^4) \\ &+ (a\_1+a\_2-a\_4)(b\_1+b\_2-b\_4)(-X^4+2X^3-X^2) \\ &+ (a\_3+a\_4)(b\_3+b\_4)(X^7-X^6+X^4-X^3) \\ &+ (a\_0+a\_1)(b\_0+b\_1)(-X^5+X^4-X^2+X) \\ &+ (a\_0-a\_4)(b\_0-b\_4)(-X^6+3X^5-4X^4+3X^3-X^2) \\ &+ a\_4b\_4(X^8-X^7+X^6-2X^5+3X^4-3X^3+X^2) \\ &+ a\_3b\_3(-X^7+2X^6-2X^5+X^4) \\ &+ a\_1b\_1(X^4-2X^3+2X^2-X) \\ &+ a\_0b\_0(X^6-3X^5+3X^4-2X^3+X^2-X+1). \end{aligned}$$
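As a sanity check, the 13-line formula can be verified numerically against the school book product. The table of linear forms and polynomial factors below is a direct transcription of the lines above; the values of `p`, `a` and `b` are illustrative:

```python
# Verify Montgomery's 13-multiplication formula over F_p: each entry is
# (coefficients of the a-linear form, of the b-linear form, polynomial factor
# as {exponent: coefficient}); the weighted sum must equal A(X)*B(X).

p = 101
a = [3, 1, 4, 1, 5]
b = [9, 2, 6, 5, 3]

TERMS = [
    ([1, 1, 1, 1, 1],    [1, 1, 1, 1, 1],    {5: 1, 4: -1, 3: 1}),
    ([1, 0, -1, -1, -1], [1, 0, -1, -1, -1], {6: 1, 5: -2, 4: 2, 3: -1}),
    ([1, 1, 1, 0, -1],   [1, 1, 1, 0, -1],   {5: -1, 4: 2, 3: -2, 2: 1}),
    ([1, 1, 0, -1, -1],  [1, 1, 0, -1, -1],  {5: 1, 4: -2, 3: 1}),
    ([1, 0, -1, -1, 0],  [1, 0, -1, -1, 0],  {6: -1, 5: 2, 4: -1}),
    ([0, 1, 1, 0, -1],   [0, 1, 1, 0, -1],   {4: -1, 3: 2, 2: -1}),
    ([0, 0, 0, 1, 1],    [0, 0, 0, 1, 1],    {7: 1, 6: -1, 4: 1, 3: -1}),
    ([1, 1, 0, 0, 0],    [1, 1, 0, 0, 0],    {5: -1, 4: 1, 2: -1, 1: 1}),
    ([1, 0, 0, 0, -1],   [1, 0, 0, 0, -1],   {6: -1, 5: 3, 4: -4, 3: 3, 2: -1}),
    ([0, 0, 0, 0, 1],    [0, 0, 0, 0, 1],    {8: 1, 7: -1, 6: 1, 5: -2, 4: 3, 3: -3, 2: 1}),
    ([0, 0, 0, 1, 0],    [0, 0, 0, 1, 0],    {7: -1, 6: 2, 5: -2, 4: 1}),
    ([0, 1, 0, 0, 0],    [0, 1, 0, 0, 0],    {4: 1, 3: -2, 2: 2, 1: -1}),
    ([1, 0, 0, 0, 0],    [1, 0, 0, 0, 0],    {6: 1, 5: -3, 4: 3, 3: -2, 2: 1, 1: -1, 0: 1}),
]

C = [0] * 9                             # degree-8 result
for la, lb, poly in TERMS:              # 13 multiplications in F_p
    u = sum(c * x for c, x in zip(la, a)) * sum(c * x for c, x in zip(lb, b)) % p
    for e, c in poly.items():
        C[e] = (C[e] + c * u) % p

school = [0] * 9                        # reference: 25 multiplications
for i, x in enumerate(a):
    for j, y in enumerate(b):
        school[i + j] = (school[i + j] + x * y) % p

print(C == school)                      # True
```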

The cost of these computations is 13*Mp* + 22*Ap*. Note that in order to recover the final expression of the degree-8 polynomial *C*, we have to re-organize the 13 products to find its coefficients. We denote the product on line *i* of Montgomery's formula by *u*<sub>*i*</sub>, 0 ≤ *i* ≤ 12 (e.g. *u*<sub>12</sub> = (*a*<sub>0</sub> + *a*<sub>1</sub> + *a*<sub>2</sub> + *a*<sub>3</sub> + *a*<sub>4</sub>)(*b*<sub>0</sub> + *b*<sub>1</sub> + *b*<sub>2</sub> + *b*<sub>3</sub> + *b*<sub>4</sub>) and *u*<sub>11</sub> = (*a*<sub>0</sub> − *a*<sub>2</sub> − *a*<sub>3</sub> − *a*<sub>4</sub>)(*b*<sub>0</sub> − *b*<sub>2</sub> − *b*<sub>3</sub> − *b*<sub>4</sub>)). Re-arranging the formula by powers of *X*, we obtain the following expression for *C*:

$$\begin{aligned}
C ={}& u_3X^8 + (-u_2 - u_3 + u_6)X^7 + (u_0 + 2u_2 + u_3 - u_4 - u_6 - u_8 + u_{11})X^6 \\
&+ (-3u_0 - 2u_2 - 2u_3 + 3u_4 - u_5 + 2u_8 + u_9 - u_{10} - 2u_{11} + u_{12})X^5 \\
&+ (3u_0 + u_1 + u_2 + 3u_3 - 4u_4 + u_5 + u_6 - u_7 - u_8 - 2u_9 + 2u_{10} + 2u_{11} - u_{12})X^4 \\
&+ (-2u_0 - 2u_1 - 3u_3 + 3u_4 - u_6 + 2u_7 + u_9 - 2u_{10} - u_{11} + u_{12})X^3 \\
&+ (u_0 + 2u_1 + u_3 - u_4 - u_5 - u_7 + u_{10})X^2 + (-u_0 - u_1 + u_5)X + u_0.
\end{aligned}$$

Considering this expression, hidden additions must be taken into account. Once every simplification is done, the total complexity of Montgomery's method is 13*Mp* + 62*Ap*.
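For reference, the 3-multiplication Karatsuba identity is the building block that these Karatsuba-like constructions compose; the sketch below works over plain integers, whereas in the chapter's setting the coefficients live in **F**<sub>*p*</sub>.

```python
# Karatsuba: one product of 2-term polynomials with 3 base multiplications
# instead of 4 -- the block applied at every stage of a 2^i 3^j tower.
def karatsuba_2term(a, b):
    a0, a1 = a
    b0, b1 = b
    m0 = a0 * b0                    # 3 multiplications...
    m2 = a1 * b1
    m1 = (a0 + a1) * (b0 + b1)
    return [m0, m1 - m0 - m2, m2]   # ...and 4 extra additions

# (3 + 5X)(7 + 2X) = 21 + 41X + 10X^2
assert karatsuba_2term([3, 5], [7, 2]) == [21, 41, 10]
```

Trading one multiplication for a few additions is exactly the economy that Montgomery's 13-multiplication formula pushes further for 5-term polynomials.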

In [30], Newton's interpolation gives a better result for the multiplication of 5-term polynomials. The interpolation values are α<sub>0</sub> = 0, α<sub>1</sub> = 1, α<sub>2</sub> = −1, α<sub>3</sub> = 2, α<sub>4</sub> = −2, α<sub>5</sub> = 4, α<sub>6</sub> = −4, α<sub>7</sub> = 3, α<sub>8</sub> = ∞. With these values, the evaluations of *A* and *B* are composed only of shifts and additions. Details are provided in [30]: the evaluations of *A*(*X*) and *B*(*X*) have a total complexity of 48*Ap*, and the evaluation of *C*(*X*) at the α<sub>*i*</sub> costs 9*Mp*. The computation of the coefficients *c*′<sub>*i*</sub> is not straightforward, because a few divisions by 3, 5 and 7 appear in the formulas of Section 5.3.2. As division is an expensive operation over a finite field, the authors avoid it with a trick based on the binary decomposition of integers, which performs these divisions very efficiently; the complexity of each division is smaller than 2*Ap*. The global complexity for the computation of the *c*′<sub>*i*</sub> is then 64*Ap*. Finally, the reconstruction of the polynomial *C*(*X*) using Horner's scheme has a complexity of 28*Ap*, and the total complexity of the multiplication of 5-term polynomials is 9*Mp* + 137*Ap*.
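The evaluation–interpolation scheme can be sketched generically: evaluate both polynomials at the nine points above (with ∞ contributing the product of the leading coefficients), multiply pointwise, then interpolate. The sketch below uses an exact rational linear solve in place of the optimized shift-and-division reconstruction of [30], so it reproduces the 9 multiplications but not the optimized addition count.

```python
# Sketch of 5-term polynomial multiplication by evaluation-interpolation
# at the points 0, 1, -1, 2, -2, 4, -4, 3 and "infinity" from [30].
from fractions import Fraction

def poly_mul_interp(a, b):
    # a, b: coefficient lists [a0..a4] of degree-4 polynomials.
    pts = [0, 1, -1, 2, -2, 4, -4, 3]
    ev = lambda poly, x: sum(cf * x**i for i, cf in enumerate(poly))
    c_vals = [ev(a, x) * ev(b, x) for x in pts]   # 8 finite products
    c8 = a[4] * b[4]                              # 9th product, "at infinity"
    # C(x) - c8*x^8 has degree <= 7: solve the 8x8 Vandermonde system.
    n = 8
    rows = [[Fraction(x) ** i for i in range(n)] for x in pts]
    rhs = [Fraction(v - c8 * x ** 8) for v, x in zip(c_vals, pts)]
    for col in range(n):                          # exact Gauss-Jordan
        piv = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        scale = 1 / rows[col][col]
        rows[col] = [e * scale for e in rows[col]]
        rhs[col] *= scale
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [e - f * g for e, g in zip(rows[r], rows[col])]
                rhs[r] -= f * rhs[col]
    return [int(v) for v in rhs] + [c8]

# Cross-check against schoolbook multiplication (25 multiplications).
a, b = [3, 1, 4, 1, 5], [9, 2, 6, 5, 3]
school = [0] * 9
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        school[i + j] += ai * bj
assert poly_mul_interp(a, b) == school
```

Only the nine pointwise products are multiplications of (combinations of) input coefficients; everything else is additions, shifts and small divisions, which is what makes the 9*Mp* count possible.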


http://dx.doi.org/10.5772/56295


Efficient Computation for Pairing Based Cryptography: A State of the Art


**Table 6.** Comparison of pairings considering Weierstrass models

| Twist deg. | Coord. [24] | Prev. coord. | Curve | Doubling [24] | Prev. ref. | Prev. doubling | Curve order | Addition [24] | Prev. addition |
|---|---|---|---|---|---|---|---|---|---|
| *d* = 2, 4 | New coord. | Jacobian | *y*<sup>2</sup> = *x*<sup>3</sup> + *c*<sup>2</sup> | (2*k*/*d* + 3)*Mp* + 5*Sp* | [23] | (2*k*/*d* + 3)*Mp* + 5*Sp* | 3 \| ♯*E* | *Mc* + (2*k*/*d* + 3)*Mp* + 5*Sp* | *Mc* + (2*k*/*d* + 3)*Mp* + 5*Sp* |
| *d* = 2, 6 | Projective | Projective | *y*<sup>2</sup> = *x*<sup>3</sup> + *b* | *Mb* + (2*k*/*d* + 2)*Mp* + 7*Sp* | [2] | (2*k*/*d* + 3)*Mp* + 8*Sp* | 3 ∤ ♯*E* | *Mb* + (2*k*/*d* + 2)*Mp* + 7*Sp* | (2*k*/*d* + 3)*Mp* + 8*Sp* |
| *d* = 2, 6 | Projective | Jacobian | *y*<sup>2</sup> = *x*<sup>3</sup> + *b* | *Mb* + (*k* + 6)*Mp* + 7*Sp* | [31] | *Mb* + (2*k* + 8)*Mp* + 9*Sp* | any | (*k* + 16)*Mp* + 3*Sp* | not reported |
| *d* = 3 | Projective | Projective | *y*<sup>2</sup> = *x*<sup>3</sup> + *ax* | *Ma* + (2*k*/*d* + 2)*Mp* + 8*Sp* | [2] | *Ma* + (2*k*/*d* + 1)*Mp* + 11*Sp* | any | (2*k*/*d* + 12)*Mp* + *Sp* | (2*k*/*d* + 10)*Mp* + 6*Sp* |

There exist several models of elliptic curves, for instance:

• Short Weierstrass: *y*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b*, for *a*, *b* in **K**.

• Legendre coordinates: *y*<sup>2</sup> = *x*(*x* − 1)(*x* − λ), for λ ∈ **K**.

• Montgomery: *by*<sup>2</sup> = *x*<sup>3</sup> + *ax*<sup>2</sup> + *x*, for *a*, *b* in **K**.

• Edwards coordinates: *x*<sup>2</sup> + *y*<sup>2</sup> = *c*(1 + *x*<sup>2</sup>*y*<sup>2</sup>) over **K**.

• Huff's coordinates: *aX*(*Y*<sup>2</sup> − *Z*<sup>2</sup>) = *bY*(*X*<sup>2</sup> − *Z*<sup>2</sup>) for *a*<sup>2</sup> ≠ *b*<sup>2</sup> ≠ 0 over **K**.

The comparison with Montgomery's result is not straightforward, but the implementations in [30] show that these results are more efficient than Montgomery's.

In both articles, the authors also give results for 6-term and 7-term polynomials.

The fact that we can compute the multiplication efficiently for extensions of degree greater than 2 and 3 gives the opportunity to consider pairing computation over elliptic curves with an embedding degree *k* different from 2<sup>*i*</sup>3<sup>*j*</sup>, and could improve the implementation of pairings; but this work remains to be done.

#### 5.5. Original representation of finite fields

In the previous section we considered efficient multiplications for a classical representation of finite fields and their extensions. But there are many ways to represent a finite field. In [22], the authors use an original representation of finite fields to provide a very efficient implementation of a pairing. This representation is the Residue Number System (RNS), developed in [7, 8]. The RNS representation relies on the Chinese remainder theorem. Let B = {*m*<sub>1</sub>,...,*m*<sub>*n*</sub>} be a set of pairwise co-prime natural integers, *M* = ∏<sub>*i*=1</sub><sup>*n*</sup> *m*<sub>*i*</sub> and 0 ≤ *X* < *M*. There exists a unique representation *X*<sub>B</sub> of *X* in the basis B, *X*<sub>B</sub> = {*X* mod *m*<sub>1</sub>,..., *X* mod *m*<sub>*n*</sub>} = {*x*<sub>1</sub>, *x*<sub>2</sub>,..., *x*<sub>*n*</sub>}. Given *X*<sub>B</sub>, we can reconstruct *X* using the Chinese Remainder theorem:

$$X = \left(\sum\_{i=1}^{n} (x\_i \times b\_i^{-1} \mod m\_i) \times b\_i\right) \mod M,\text{ where } b\_i = \frac{M}{m\_i}.$$

The RNS representation is obviously very interesting for parallel computations. An efficient multiplication in RNS representation, based on the Montgomery modular multiplication, is described in [7, 8]. In [22], the authors present two very efficient implementations of a pairing algorithm on an FPGA in RNS representation. They implement the optimal Ate pairing at several security levels on Altera and Xilinx FPGAs, compare their results with previous work, and obtain very good results.
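As a sketch of why RNS parallelizes so well, the snippet below shows the channel-wise behaviour of RNS multiplication and the CRT reconstruction formula above; the moduli are toy values, not the cryptographic-size bases of [22].

```python
# A sketch of RNS arithmetic with toy moduli.
def to_rns(x, basis):
    # X_B = (X mod m_1, ..., X mod m_n): independent "channels".
    return [x % m for m in basis]

def from_rns(xs, basis):
    # CRT reconstruction: X = sum_i (x_i * b_i^{-1} mod m_i) * b_i  mod M,
    # where b_i = M / m_i, exactly as in the formula above.
    M = 1
    for m in basis:
        M *= m
    total = 0
    for x, m in zip(xs, basis):
        b = M // m
        total += (x * pow(b, -1, m) % m) * b
    return total % M

basis = [13, 17, 19, 23]              # pairwise co-prime moduli
M = 13 * 17 * 19 * 23
x, y = 1234, 4321
# Multiplication is channel-parallel: one independent modular product
# per modulus, with no carry propagation between channels.
prod = [(u * v) % m for u, v, m in zip(to_rns(x, basis), to_rns(y, basis), basis)]
assert from_rns(prod, basis) == (x * y) % M
```

Each channel only ever touches residues smaller than its modulus, which is what makes hardware implementations with one small multiplier per channel attractive.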

#### 5.6. The arithmetic of Pairings

The complexity of the computation of a pairing depends on the finite field and its underlying arithmetic, but also on the model and equation of the elliptic curve and on the choice of coordinates. Usually, an elliptic curve is represented using the short Weierstrass equation, which is of the form *E* : *y*<sup>2</sup> = *x*<sup>3</sup> + *ax* + *b*, with *a* and *b* elements of the finite field **F**<sub>*p*</sub>. In [20], Brier and Joye show that the value *a* can be chosen to be −3; this choice contributes to improving the computation of pairings. But even for a short Weierstrass equation several cases exist: we can have *b* = 0, or *a* = 0 with *b* a square or not. For each option, the coordinates also influence the efficiency of the computation of a pairing. The coordinates are usually chosen among affine, Projective and Jacobian. The affine coordinates are often put aside: the operations over the elliptic curve in affine coordinates involve inversions over finite fields, and as inversion over a finite field is an expensive operation, one tries to avoid it as far as possible. To achieve this aim, the Projective or Jacobian coordinates are suitable, as by construction they replace the inversions of affine coordinates with multiplications. The fact that affine coordinates involve inversions was a drawback to their use in pairing based cryptography. In [51], the authors analyzed the use of affine coordinates for pairing based cryptography. They adapted two known techniques for speeding up field inversion to the pairing based cryptography case, and found that for high security levels an implementation of a pairing in affine coordinates will be much faster than an implementation in projective coordinates.
The first technique to improve inversion consists in computing inverses in extension fields by using towers of extension fields, transforming the inverse computation into subfield computations via the norm map. Using this technique, the authors drastically reduce the ratio of the cost of an inversion to that of a multiplication in extension fields. This is very interesting for the computation of pairings over a large extension field, typically at a high security level such as 256 bits. The second trick is to take advantage of inversion-sharing, a standard trick whenever several inversions are computed at once; it involves reading the binary expansion from right to left instead of left to right. This second method is very interesting when multi-core processors are used, since it can easily be parallelized. Detailed performance numbers, with timings for base field and extension field arithmetic, can be found in [51]. For more moderate security levels, the Projective and Jacobian coordinates remain for now more suitable.
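The generic form of the inversion-sharing trick (usually credited to Montgomery) computes *n* inverses for the price of one inversion plus 3(*n* − 1) multiplications. A minimal sketch over **F**<sub>*p*</sub>, with Fermat inversion standing in for whatever field inverter an implementation actually uses:

```python
# Inversion-sharing: n inverses in F_p for one inversion plus
# 3(n - 1) multiplications.
def batch_inverse(xs, p):
    prefix = [1]
    for x in xs:
        prefix.append(prefix[-1] * x % p)      # running products x_0 ... x_i
    inv = pow(prefix[-1], p - 2, p)            # one inversion of the product
    out = [0] * len(xs)
    for i in range(len(xs) - 1, -1, -1):
        out[i] = inv * prefix[i] % p           # peel off 1 / x_i
        inv = inv * xs[i] % p                  # drop x_i from the inverse
    return out

p = 101
xs = [3, 7, 10, 55]
assert batch_inverse(xs, p) == [pow(x, p - 2, p) for x in xs]
```

The trade works whenever several independent inversions accumulate, e.g. across point operations in affine coordinates.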

In [24], the authors review, compare and improve several works dealing with the optimization of pairings, considering all the possibilities for the Weierstrass equation. They give efficient computations in Jacobian and Projective coordinates. We summarize their work in Table 6.



26 Theory and Practice of Cryptography and Network Security Protocols and Technologies



Several works study the efficiency of pairing implementations over some of these models of elliptic curves. The Edwards elliptic curves were recently introduced in cryptography. In [32], Edwards demonstrates that every elliptic curve *E* defined over an algebraic number field is birationally equivalent, over some extension of that field, to a curve given by the equation:

$$x^2 + y^2 = c^2(1 + x^2y^2). \tag{12}$$
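As a concrete illustration of arithmetic on a curve of this form, the sketch below implements Edwards' addition law over a small prime field; the same formula performs doublings. The prime *p*, the constant *c* and the brute-force point search are toy choices for illustration, not values from the chapter.

```python
# Arithmetic on an Edwards curve x^2 + y^2 = c^2 (1 + x^2 y^2) over F_p.
p, c = 101, 3

def on_curve(P):
    x, y = P
    return (x * x + y * y - c * c * (1 + x * x * y * y)) % p == 0

def inv(z):
    return pow(z, p - 2, p)            # Fermat inversion in F_p

def add(P, Q):
    # Edwards' addition law; the neutral element is (0, c).
    x1, y1 = P
    x2, y2 = Q
    t = x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + x2 * y1) * inv(c * (1 + t) % p) % p
    y3 = (y1 * y2 - x1 * x2) * inv(c * (1 - t) % p) % p
    return (x3, y3)

# Brute-force a point whose doubling denominators do not vanish.
P = next((x, y) for x in range(1, p) for y in range(1, p)
         if on_curve((x, y)) and (x * x * y * y) % p not in (1, p - 1))
assert on_curve(add(P, P))             # doubling uses the same formula
assert add(P, (0, c)) == P             # (0, c) acts as the neutral element
```

Having one formula for both addition and doubling is what makes exponentiation on these curves naturally resistant to simple side-channel analysis.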


Edwards curves became interesting for elliptic curve cryptography when Bernstein and Lange proved in [18] that they provide addition and doubling formulas faster than all addition formulas known at that time. An advantage of Edwards coordinates is that the addition law can be complete (i.e. the formulas for adding and doubling points are the same), so that exponentiation in Edwards coordinates is naturally protected against side channel attacks. Recently, Edwards elliptic curves were used to compute pairings [3, 44]. In [46], the authors study Huff's model of an elliptic curve; they provide explicit formulae for fast doubling and addition, and also for Tate pairing computation. Another example is the work in [72], where the authors consider Selmer elliptic curves and present formulae for doubling, addition and pairing computations; they compare their results to various elliptic curve models such as Weierstrass, Edwards and Hessian. There are many choices for the equation/model of the elliptic curve and for the coordinates; the website [17] collects every new result on this subject and is a very nice overview of this topic of research.

#### 6. Conclusions

We presented the various pairings available for cryptographic use. As pairings are meant to be implemented in smart cards, the efficiency of a pairing implementation is the subject of much research. We presented optimizations developed for the improvement of a pairing implementation. We introduced the twisted elliptic curve, which leads to the denominator elimination. We constructed the extension field **F**<sub>*p*<sup>*k*</sup></sub> using tower fields, and the method for an efficient multiplication over each step of the tower. We described an efficient squaring method combined with the cyclotomic subgroup. We also highlighted the fact that the choice of the model of the elliptic curve and the choice of the coordinates are important for an efficient implementation. We saw that an original representation of an element of the base field **F**<sub>*p*</sub> can lead to a very efficient implementation. To conclude, the optimization of pairings is a very interesting area of research, and many scientists work hard to find new optimizations. Further research can follow the presented optimizations and adapt them to the case of pairings over hyperelliptic curves, or find other points of optimization in the implementation.

#### Author details

Nadia El Mrabet

LIASD, University Paris 8, Saint Denis, France

#### References

[1] Ahmadi O., Hankerson D., Menezes A.: Software Implementation of Arithmetic in **F**<sub>3<sup>*m*</sup></sub>, WAIFI Conference, Madrid, Spain, 2007.

[2] C. Arene, T. Lange, M. Naehrig and C. Ritzenthaler, Faster pairing computation, Cryptology ePrint Archive, Report 2009/155, 2009

[3] C. Arène, T. Lange, M. Naehrig and C. Ritzenthaler, Faster Computation of the Tate Pairing, Cryptology ePrint Archive, Report 2009/155, http://eprint.iacr.org/2009/155, 2009

[4] J.C. Bajard and N. El Mrabet, Pairing in cryptography: an arithmetic point of view, *Advanced Signal Processing Algorithms, Architectures and Implementations XVI*, part of SPIE, August 2007.

[5] Bajard J.C., Imbert L., Negre Ch.: Arithmetic Operations in Finite Fields of Medium Prime Characteristic Using the Lagrange Representation, IEEE Transactions on Computers, September 2006 (Vol. 55, No. 9), pp. 1167-1177

[6] Bajard J.C., Meloni N., Plantard T.: Efficient RNS bases for Cryptography, IMACS'05, Applied Mathematics and Simulation, 2005

[7] J-C. Bajard, L-S. Didier and P. Kornerup, Modular Multiplication and Base Extensions in Residue Number Systems, 15th IEEE Symposium on Computer Arithmetic (Arith-15 2001), p. 59–65, 2001.

[8] J-C. Bajard, L-S. Didier and P. Kornerup, An RNS Montgomery Modular Multiplication Algorithm, IEEE Trans. Computers, vol. 47, p. 766–776, 1998.

[9] Barreto P.: The Pairing-Based Crypto Lounge, http://paginas.terra.com.br/informatica/paulobarreto/pblounge.html

[10] P.D. Barrett, Implementing the Rivest Shamir and Adleman Public Key Encryption Algorithm on a Standard Digital Signal Processor, *Advances in Cryptology (CRYPTO)*, LNCS 263, 311–323, 1986.

[11] Barreto P., Lynn B., Scott M.: On the Selection of Pairing-Friendly Groups, Selected Areas in Cryptography SAC 2003, LNCS 3006, 2004, 17-25

[12] P.S.L.M. Barreto, S.D. Galbraith, C. Ó hÉigeartaigh and M. Scott, Efficient Pairing Computation on Supersingular Abelian Varieties, *Designs, Codes and Cryptography*, 42 (3), 239–271, 2007.

[13] Barreto P., Kim H., Lynn B., Scott M.: Efficient algorithms for pairing-based cryptosystems, Advances in Cryptology CRYPTO 2002, LNCS 2442 (2002), 354-368.

[14] Barreto P., Naehrig M.: Pairing-friendly elliptic curves of prime order, Selected Areas in Cryptography (SAC 2005), LNCS 3897 (2006), 319-331.

[15] Barreto P., Lynn B., Scott M.: Efficient implementation of pairing-based cryptosystems, Journal of Cryptology, 17 (2004), 321-334

[16] Barreto P., Scott M.: Compressed pairings, Advances in Cryptology – Crypto 2004, LNCS 3152, 140-156, http://eprint.iacr.org/2004/032.

[17] D. J. Bernstein and T. Lange, Explicit-Formulas Database, http://www.hyperelliptic.org/EFD


[18] D. J. Bernstein and T. Lange, Faster additions and doubling on elliptic curves, Advances in Cryptology – ASIACRYPT 2007, LNCS, vol. 4833, p. 29–50, 2007

[19] Boneh D., Franklin M.: Identity-based encryption from the Weil pairing, SIAM Journal of Computing, 32, 586–615, 2003

[20] Brier E., Joye M.: Point multiplication on elliptic curves through isogenies, AAECC 2003, LNCS, vol. 2643, 2003, 43–50.

[21] Blake F., Seroussi G., Smart N. (editors): Advances in Elliptic Curve Cryptography, London Mathematical Society Lecture Note Series (No. 317), Cambridge University Press, 2005

[22] R. C. C. Cheung, S. Duquesne, J. Fan, N. Guillermin, I. Verbauwhede and G. Xiaoxu Yao, FPGA Implementation of Pairings Using Residue Number System and Lazy Reduction, Cryptographic Hardware and Embedded Systems – CHES 2011, LNCS vol. 6917, p. 421–441, 2011.

[23] C. Costello, H. Hisil, C. Boyd, J-M. González Nieto and K. Koon-Ho Wong, Faster pairings on special Weierstrass curves, Pairing 2009, LNCS, vol. 5671, p. 89–101, 2009

[24] C. Costello, T. Lange and M. Naehrig, Faster pairing computations on curves with high-degree twists, PKC 2010 (13th International Conference on Practice and Theory in Public Key Cryptography), LNCS, vol. 6056, p. 224–242, 2010.

[25] Cohen H., Frey G. (editors): Handbook of elliptic and hyperelliptic curve cryptography, Discrete Math. Appl., Chapman & Hall/CRC (2006)

[26] Diffie W., Hellman M.: New directions in cryptography, IEEE Transactions on Information Theory, 22 (1976), 644-654.

[27] I. Duursma and H. Lee, Tate Pairing Implementation for Hyperelliptic Curves *y*<sup>2</sup> = *x*<sup>*p*</sup> − *x* + *d*, *Advances in Cryptology (ASIACRYPT)*, Springer-Verlag LNCS 2894, 111–123, 2003.

[28] Duquesne S., Frey G.: Background on Pairings, Chapter 6 of Cohen H., Frey G.: Handbook of elliptic and hyperelliptic curve cryptography, Discrete Math. Appl., Chapman & Hall/CRC (2006)

[29] Duquesne S., Frey G.: Implementation of Pairings, Chapter 16 of Cohen H., Frey G.: Handbook of elliptic and hyperelliptic curve cryptography, Discrete Math. Appl., Chapman & Hall/CRC (2006)

[30] N. El Mrabet, A. Guillevic and S. Ionica, Efficient Multiplication in Finite Field Extensions of Degree 5, Progress in Cryptology – Africacrypt 2011, Springer-Verlag LNCS 6737, p. 188–205, 2011.

[31] N. El Mrabet, N. Guillermin and S. Ionica, A study of pairing computation for elliptic curves with embedding degree 15, Cryptology ePrint Archive, Report 2009/370, 2009.

[32] H. Edwards, A normal form for elliptic curves, Bulletin of the American Mathematical Society, vol. 44, no. 3, p. 393–422, July 2007

[33] D. Freeman, M. Scott and E. Teske, A Taxonomy of Pairing-Friendly Elliptic Curves, Journal of Cryptology, vol. 23, no. 2, p. 224–280, 2010

[34] Frey G., Müller M., Rück H.G.: The Tate Pairing and the Discrete Logarithm Applied to Elliptic Curve Cryptosystems, IEEE Transactions on Information Theory, 45, 1717-1719, 1999

[35] Fleischmann P., Paar C., Soria-Rodriguez P.: Fast Arithmetic for Public-Key Algorithms in Galois Fields with Composite Exponents, IEEE Transactions on Computers, vol. 48, no. 10, pp. 1025-1034, October 1999.

[36] Frey G., Rück H.G.: A Remark Concerning m-divisibility and the Discrete Logarithm in the Divisor Class Group of Curves, Math. Comp., 62, 865-874, 1994.

[37] Galbraith S.: Pairings, Chapter IX of Advances in Elliptic Curve Cryptography, F. Blake, G. Seroussi and N. Smart, editors, London Mathematical Society Lecture Note Series (No. 317), Cambridge University Press, 2005

[38] S.D. Galbraith, C. Ó hÉigeartaigh and C. Sheedy, Simplified Pairing Computation and Security Implications, *Journal of Mathematical Cryptology*, 1 (3), 267–282, 2007.

[39] S.D. Galbraith, F. Hess and F. Vercauteren, Aspects of Pairing Inversion, *IEEE Transactions on Information Theory*, 54 (12), 5719–5728, 2008.

[40] Granger R., Page D., Stam M.: Hardware and Software Normal Basis Arithmetic for Pairing-Based Cryptography in Characteristic Three, IEEE Transactions on Computers, vol. 54(7): 852–860, July 2005

[41] R. Granger and M. Scott, Faster Squaring in the Cyclotomic Subgroup of Sixth Degree Extensions, Practice and Theory in Public Key Cryptography 2010, LNCS vol. 6056, p. 209–223, 2010

[42] F. Hess, Pairing Lattices, Pairing 2008, LNCS vol. 5209, p. 18–38, 2008

[43] F. Hess, N. Smart and F. Vercauteren, The Eta Pairing Revisited, IEEE Transactions on Information Theory, vol. 52, p. 4595–4602, 2006

[44] S. Ionica and A. Joux, Another Approach to Pairing Computation in Edwards Coordinates, INDOCRYPT '08, LNCS, vol. 5365, p. 400–413, 2008

[45] Joux A.: A one round protocol for tripartite Diffie-Hellman, Algorithmic Number Theory: Fourth International Symposium, LNCS, 1838 (2000), 385-393. Full version: Journal of Cryptology, 17 (2004), 263-276

[46] M. Joye, M. Tibouchi and D. Vergnaud, Huff's Model for Elliptic Curves, Algorithmic Number Theory (ANTS-IX), LNCS vol. 6197, p. 234–250, 2010

[47] Koblitz N.: Elliptic curve cryptosystems, Mathematics of Computation, Vol. 48, 1987, 203-209.

[48] N. Koblitz and A. Menezes, Pairing-Based Cryptography at High Security Levels, Cryptography and Coding 2005, LNCS vol. 3796, p. 13–36, 2005

[49] S. Kwon, Efficient Tate Pairing Computation for Supersingular Elliptic Curves over Binary Fields, *Cryptology ePrint Archive*, Report 2004/303, 2004.

[51] K. Lauter, P. L. Montgomery and M. Naehrig, An Analysis of Affine Coordinates for Pairing Computation, Pairing 2010, LNCS, vol. 6487, p. 1–20, 2010

[52] Lenstra A., Stam M.: Efficient Subgroup Exponentiation in Quadratic and Sixth Degree Extensions, Cryptographic Hardware and Embedded Systems – CHES 2002, LNCS 2523, pp. 318-332.

[53] Menezes A., Okamoto T. and Vanstone S.A.: Reducing Elliptic Curve Logarithms to Logarithms in a Finite Field, IEEE Transactions on Information Theory, 39 (1993), 1639-1646.

[55] Miller V.: Use of Elliptic Curves in Cryptography, Advances in Cryptology – Crypto 85, pages 417-426, Vol. 218 of LNCS, 1986.

[56] Miller V.: Short Programs for Functions on Curves, unpublished manuscript, 1986, http://crypto.stanford.edu/miller/miller.pdf

[58] P. L. Montgomery, Five, Six, and Seven-Term Karatsuba-Like Formulae, IEEE Transactions on Computers 2005, vol. 54, p. 362–369.

[59] Montgomery P. L.: Modular multiplication without trial division, Mathematics of Computation, 44, 519–521, 1985.

[60] The Pairing conference series: Pairing 2005 in Dublin, Ireland, http://pic.computing.dcu.ie/; Pairing 2007 in Tokyo, Japan, http://www.pairing-conference.org/; Pairing 2008 in Egham, UK; Pairing 2009 in Palo Alto, CA, USA; Pairing 2010 in Yamanaka Hot Spring, Ishikawa, Japan; Pairing 2012 in Darmstadt, Germany, http://2012.pairing-conference.org/

[61] PARI/GP, version 2.1.7, Bordeaux, 2005, http://pari.math.u-bordeaux.fr/

[63] Rubin K., Silverberg A.: Torus-Based Cryptography, Advances in Cryptology – Crypto 2003, Springer-Verlag, LNCS 2729, p. 349–365, 2003.

[64] Silverman J.: The Arithmetic of Elliptic Curves, Springer-Verlag, Graduate Texts in Mathematics, vol. 106, 1992

[65] Shamir A.: Identity Based Cryptosystems and Signature Schemes, Advances in Cryptology – Crypto '84, LNCS, Vol. 196, pp 47-53, 1984

[66] M. Scott, Computing the Tate Pairing, *Topics in Cryptology (CT-RSA)*, Springer-Verlag LNCS 3376, 293–304, 2005.

[67] M. Scott, N. Benger, M. Charlemagne, L. J. Dominguez Perez and E. J. Kachisa, On the Final Exponentiation for Calculating Pairings on Ordinary Elliptic Curves, Pairing 2009, LNCS vol. 5671, p. 78–88, 2009

[68] M. Scott, N. Costigan and W. Abdulwahab, Implementing Cryptographic Pairings on Smartcards, *Cryptographic Hardware and Embedded Systems (CHES)*, Springer-Verlag LNCS 4249, 134–147, 2006.

[69] M. Van Dijk, R. Granger, D. Page, K. Rubin, A. Silverberg, M. Stam and D. Woodruff, Practical cryptography in high dimensional tori, Advances in Cryptology – Eurocrypt 2005

[70] F. Vercauteren, Optimal pairings, IEEE Transactions on Information Theory, vol. 56, no. 1, p. 455–461, Jan 2010

[71] C. Whelan and M. Scott, The Importance of the Final Exponentiation in Pairings When Considering Fault Attacks, *Pairing-Based Cryptography*, Springer-Verlag LNCS 4575, 225–246, 2007.

[72] L. Zhang, K. Wang, H. Wang and D. Ye, Another Elliptic Curve Model for Faster Pairing Computation, ISPEC 2011, LNCS vol. 6672, p. 432–446, 2011.

[56] Miller V.: Short Programs for Functions on Curves, IBM, Thomas J. Watson Research Center,

[57] P.L. Montgomery. Modular Multiplication Without Trial Division. *Mathematics of Computation*,

[58] P. L. Montgomery Five, Six and Seven-terms Karatsuba-Like Formulae, IEEE Transactions on

[59] C. O'hEigeartaigh. Pairing Computation on Hyperelliptic Curves of Genus 2. PhD Thesis, Dublin

[62] Paterson K.G.: Cryptography from Pairings, Chapter X, Advances in Elliptic Curve Cryptography, F. Blake and G. Seroussi and N. Smart editors, Series: London Mathematical Society Lecture Note

[63] K. Rubin and A. Silverberg, Torus-Based Cryptography, Advances in Cryptology CRYPTO 2003,

[64] J.H. Silverman The arithmetic of elliptic curves, Graduate Texts in Mathematics, Springer Verlag,

in a Finite Field, IEEE Trans. Inf . Theory 39, numéro 5, pages 1639-1646, 1993.

[54] Miller V.: The Weil pairing and its efficient calculation, J. Cryptology, 17 (2004), 235-261.

[50] R. Lidl and H. Niederreiter Finite Fields Cambridge University Press, 1994, 0-521-39231-4


**Chapter 4**


## **A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations**

Masa-aki Fukase

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56145

#### **1. Introduction**

The ubiquitous environment was described as a vision for 21st century computing [1]. In the last ten years, the ubiquitous network has become one of the remarkable trends of information and communications technology. One of the most important characteristics of the ubiquitous network is open access anytime anywhere. This corresponds to the mobility and diversity of power conscious PC processors, mobile processors, cryptography processors, RFID tags, and so forth. In view of the desire for better cost performance, simplicity, functionality, usability, and so on, the ad hoc network is an emerging technology for next generation ubiquitous computing [2]. However, these specific features involve fundamental issues as follows.

**i.** Currently, the promotive force of diversity is not the conventional wired network based on large servers, but convenient wireless LANs and handheld small devices, like the PDA (personal digital assistant), mobile phone and so forth. Although the diversity of various platforms is inevitable, it also causes notorious security issues such as insecurity, security threats or illegal attacks, such as tapping, intrusion and pretension. Nowadays, WEP (Wired Equivalent Privacy) is not so effective for wireless LANs. Worldwide diversity vs. the security threat are the two faces charac‐ teristic of the ubiquitous network [3]. Safety of the ubiquitous network has two aspects. One is the front-end security of the ubiquitous device and the other is the protection of the multimedia data itself, stored in the ubiquitous device. Neither approach always promises complete safety, but they complement one another. A cutting-edge technique for front-end security is the TPM (trusted platform module), commonly known as the security chip. Since the TPM implements RSA (Rivest-Shamir-Adelman), it works for short, password-size text data, and its major role is

implicitly digital signing. In view of the running time, the encryption of long-length multimedia data, such as an image, is definitely outside the TPM. The other approach, back-end security, protects huge amounts of data, because multimedia information, crucial for the interaction between ubiquitous devices and human beings, uses massive amounts of data. Back-end security is usually covered by common key schemes. Common key module-embedded processors are built-in cryptography processors for IC cards and portable electronic devices.

© 2013 Fukase; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**ii.** On the other hand, considering that cipher algorithms are open to third parties in the evaluation of cipher strength, hardware specifications are more important than cipher strength in developing an ad hoc network infrastructure. A fundamental issue in maintaining the mobility of the ubiquitous environment is how to achieve power-saving mobility. The power consuming factors of ubiquitous networks are large server systems controlled by network providers and small ubiquitous platform systems, handheld devices and so forth. These small systems consist of processors, memory and displays. The power dissipation of memory strongly depends on that of a memory cell and on the memory space required for embedded software. LCDs (liquid crystal displays) and processors consume similar power in running mobile devices. While the LCD is turned on only when it displays some information, processors are always in the standby state to receive calls. Thus, power-saving restrictions for mobile devices are inevitably imposed on the processors.

**iii.** Another issue of ubiquitous computing is its strong dependency on embedded software. This has a crucial effect on the total design of ubiquitous devices. Performable features of ubiquitous devices in processing multimedia data have mainly relied on embedded software. For example, if a new protocol appears, embedded software requires users to download an update package. Even so, software size has so far increased rapidly due to the RTOS (real time OS), firmware, application software and so on. This results in higher software costs and wider memory space. Since cutting-edge ubiquitous devices need not only sophisticated and complicated processing, but also power conscious high-speed operation, the embedded software approaches taken so far will not always continue to play key roles in ubiquitous computing.

In order to solve the fundamental issues described above, power conscious management of ubiquitous networks and cryptographic protection of the massive data spreading over ubiquitous networks are required. It is the task of cryptography and network security technologies to find an optimum design for this trade-off and achieve the specific features of the ubiquitous network. The double cipher scheme presented in this paper combines two cipher algorithms [4]. One is random number addressing cryptography (RAC), closely related to the internal behaviour of processors. RAC is a transposition cipher devised from the direct connection of a built-in random number generator (RNG), a register file and a data cache. The register file plays the role of a streaming buffer. A random store, based on the direct connection, scrambles or transposes a series of multimedia data at random without any special encryption operation. The other algorithm of the double cipher is a data sealing algorithm. This is implemented during the data transfer from the register file to the data cache, by using another built-in RNG. It complements the RAC's shortcoming and enhances the security of the data as a whole.

Since the double cipher scheme uses built-in RNGs at the micro-operation level, it is more effective than the normal usage based on the processing of random number operands at the instruction level. In addition, the double cipher scheme requires almost no additional chip area or power dissipation. A linear feedback shift register (LFSR) is used as the RNG to achieve a long cycle with negligible additional area. Thus, the double cipher scheme is a microarchitecture-based, software-transparent hardware cipher that secures the whole data with negligible hardware cost and moderate performance overhead. This is well suited to very large scale integration (VLSI) implementation. The VLSI implementation of the double cipher follows a multicore structure for bi-directional communication and multiple pipelines for multimedia processing and cipher streaming [5]. The cipher streaming is executed by SIMD (Single Instruction stream, Multiple Data stream) mode cipher and decipher codes. They do not attach operands, as described above, but repeat instances to transfer byte-structured data from a register file to a data cache.

In this paper, we describe the double cipher scheme, hardware algorithm, architectural organization, structural aspects, internal behaviour and VLSI implementation of the double cipher in a sophisticated ubiquitous processor named HCgorilla by using a 0.18-μm standard cell CMOS chip. We evaluate the prospective specifications of the HCgorilla chip with respect to hardware resources or cost, power dissipation, throughput and cipher strength. Potential advantages over the usual security techniques, cipher techniques and cryptography module-embedded processors are also described. HCgorilla is a power conscious hardware approach that provides multimedia data with practical security over a ubiquitous network.

#### **2. Preliminaries**

The problem statement and the course of the research of this study are described in more detail in this section. In addition, the area of application is also explained.

#### **2.1. Trends and issues of ubiquitous networks**

Faced with the progressive ubiquitous environment, we have experienced two competing requirements: diversity and security [1]. Figure 1 illustrates that the diversity of the various devices has invited illegal attacks, intrusions, pretensions and so forth. When diversity spans from the small mobile phone and the PDA to large traditional servers, ubiquitous networks are functional and useful in normal circumstances, but they are hard to control in abnormal circumstances. Since diversity brings about open access to ubiquitous networks anytime anywhere, it is a real threat to user security.


**Figure 1.** Diversity vs. security threat in a ubiquitous network

In order to achieve secure ubiquitous networks, both machines and data must be protected from abnormal phenomena. Table 1 surveys the current status of security techniques related to the ubiquitous network. Since tremendous network issues need complicated algorithms to detect and recognize individual phenomena, in the main, software techniques have been used. However, they are inflexible to individual demands, and are not always sufficient from the practical viewpoint. The hardware implementation of an IDS (intrusion detection system) and an IPS (intrusion prevention system) is another result we exploited independently [6]. As is clear from Table 1, cryptography is used for the protection of ubiquitous platforms. The cryptography adopted for the front-end security of an individual machine is public key cryptography to protect short, password-size text data. On the other hand, common key cryptography is used for the protection of data. Comparing the numerical values in Table 1 is irrelevant because they strongly depend on process technologies.


| Technique | HW/SW | Cryptographic means | Target | Transfer rate | Running time | Secureness |
|---|---|---|---|---|---|---|
| HCgorilla (ubiquitous device) | Hardware | Common key | Full text | 160-320 Mbps | Short | Practical |
| Security chip, cryptography processor (secure coprocessor, cryptographic core, elliptic curve processor, network processor) | Hardware | Public key, common key | Password, biometrics | 2 Mbps [7] | Short; out of account for multimedia data | Practical |
| Server | Software | IDS, IPS | Sampling | — | Large | Medium |

**Table 1.** HCgorilla vs. regular security techniques

Table 2 shows various aspects of ubiquitous media. They are classified into discrete and streaming media. Both types are expressed in byte structure. Discrete media is still useful in the ubiquitous environment; interactive games use many algorithmic processes for discrete data. Streaming media is more important, because most ubiquitous applications use streaming media. It is further divided into two types in view of its complexity. Text data is one type of streaming data, because it is useful as relief information in the event of a disaster. Considering that endless data is hard for mobile devices, the target of this work is discrete media and stream data. Yet, they need sophisticated and complicated processing. Since streaming media is massive, it is reasonable to protect it by a common key scheme, which is preferable for protecting large quantities of byte-structured ubiquitous information.


| | Discrete media | Streaming media: stream data | Streaming media: data stream |
|---|---|---|---|
| Definition | Individual data | A sequence of similar elements | A sequence of data, which may be different from each other |
| Characteristic | Discrete | Stream of continuous media | Stream of continuous media |
| Size or quantity | Short | Long | Endless |
| Complexity | Low | Medium | High |
| Basic structure | Byte string | Byte string | Byte string |
| Data handling | Algorithmic process | SIMD mode applications like signal processing, graphic rendering, data compression, etc. | SIMD mode applications like signal processing, graphic rendering, data compression, etc. |
| Buffer storage | — | Register file | Register file |
| Examples | Games, intelligent processes | Text, audio, video | Seismography, tsunami, traffic |
| Security | Public key | Common key cryptography | Common key cryptography |

**Table 2.** Ubiquitous media


Although the performable features of the various ubiquitous devices illustrated in Figure 1 have mainly relied on embedded software, such an approach inevitably exhausts hardware resources and degrades speed and power, problems that have worsened alongside the rapid growth of ubiquitous technologies in recent years. The majority of these technologies are resource constrained in terms of the chip area and battery energy available. It is very difficult for the regular techniques to satisfy the overall demands of the massive quantity of multimedia information. Thus, a drastic improvement of the embedded system is required to achieve really promising ubiquitous devices. In this respect, a practical solution will be a security aware, high-performance, sophisticated single VLSI chip processor [8].

#### **2.2. Challenge and goal**

The practices of an ad hoc environment require resource-constrained security. To achieve mobility, the various processor chips embedded in ubiquitous platforms are designed so that the occupied area and energy budget are as small as possible [9-11]. On the other hand, the temporal formation of wireless and mobile ad hoc networks does not have the benefit of a permanent network infrastructure but relies on the connections themselves. Thus, a practical solution to achieve ad hoc security is a single-chip VLSI processor with built-in hardware cryptography. However, to the best of our knowledge, safety aware chips to protect multimedia data over ad hoc networks have never appeared. Thus, it is a challenging task to actualize not only the processing, but also the protection of multimedia data by unifying the roles of PC processors, mobile processors, Java CPUs, cryptography processors and so on into a ubiquitous processor [5].
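As a rough illustration of what such built-in hardware cryptography does, the double cipher of Section 1 (a RAC transposition driven by one built-in RNG, plus data sealing driven by another) can be modelled behaviourally in software. The sketch below is illustrative only, not the HCgorilla microarchitecture: the 8-bit LFSR tap set, the seeds, the 255-byte block size and the function names are all assumptions made for this example. It exploits the fact that a maximal-length 8-bit LFSR visits every nonzero state exactly once, so its state sequence doubles as a permutation of store addresses.

```python
# Behavioural software model of the double cipher (RAC transposition +
# data sealing). Illustrative sketch only, NOT the HCgorilla hardware:
# tap set, seeds and block size are assumptions for this example.

BLOCK = 255  # a maximal 8-bit LFSR visits all 255 nonzero states once


def lfsr8_step(state: int) -> int:
    """One step of an 8-bit Fibonacci LFSR (feedback from bits 7,5,4,3)."""
    bit = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
    return ((state << 1) | bit) & 0xFF


def rac_addresses(seed: int = 0x01) -> list:
    """RAC idea: the RNG drives the store address, so a block written
    through it lands in pseudo-random (but invertible) order. The
    maximal-length LFSR makes the addresses a permutation of 0..254."""
    state, addrs = seed, []
    for _ in range(BLOCK):
        addrs.append(state - 1)  # map states 1..255 to offsets 0..254
        state = lfsr8_step(state)
    return addrs


def encipher(block: bytes, addr_seed: int = 0x01, seal_seed: int = 0xB5) -> bytes:
    """Random store (transposition) combined with data sealing (XOR with
    a second LFSR stream) on the way from register file to data cache."""
    assert len(block) == BLOCK
    out = bytearray(BLOCK)
    seal = seal_seed
    for i, addr in enumerate(rac_addresses(addr_seed)):
        seal = lfsr8_step(seal)       # sealing keystream from second RNG
        out[addr] = block[i] ^ seal   # sealed byte stored at RNG address
    return bytes(out)


def decipher(block: bytes, addr_seed: int = 0x01, seal_seed: int = 0xB5) -> bytes:
    """Inverse: read back in RNG-address order and strip the seal."""
    out = bytearray(BLOCK)
    seal = seal_seed
    for i, addr in enumerate(rac_addresses(addr_seed)):
        seal = lfsr8_step(seal)
        out[i] = block[addr] ^ seal
    return bytes(out)
```

A round trip restores the original block; because the address stream is a true permutation, the transposition loses no bytes, and the sealing stream hides the values that the transposition alone would leave readable.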

The double cipher is applicable to any multimedia data because

cipher process.

**Figure 3.** PPM image file

pixels/flame) requires a bandwidth of

**i.** the double cipher falls into the category of block cipher as described in Chapter 3,

A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations

http://dx.doi.org/10.5772/56145

91

Image data is expressed by PPM (portable PixMap), JPEG (joint photographic expert group), BMP (bit MaP) and so on. Figure 3 exemplifies the PPM image file of a standard image. This image has 256 lines and each line consists of 256 pixels. A PPM file consists of a header and pixel data. The header contains the PPM format ID, number of pixels in width and height and graduation. The pixel data contains all the graduation elements that are the 1-byte quantization of R-, G-, or B-elements. The 1-byte R-, G-, or B-graduation elements are the target of the double

The double cipher can be applicable to cryptographic streaming. Here, the stream is a sequence of pixels, and the cryptographic streaming is the continuous encryption or decryption of a whole image and a moving picture. In view of the video display, the flame rate is 30 flames/

On the other hand, the resolution of a QVGA (quarter video graphics array) format (320×240

Figure 4 shows the scanning modes of the image data. The continuous scan follows the exact sequence of the data format. The discontinuous scan accesses the data format in a predeter‐

256×256×3×30bytes / sec≈6Mbytes / sec≈50 *Mbps* (1)

0.23×30bytes / sec≈55 *Mbps* (2)

sec. So, the resolution of the PPM format requires a bandwidth of

mined order of discrete addresses. The mixed scan mode is also shown.

**ii.** multimedia data (image, audio, text) is byte structured as shown in Table 2.

The goal of our study, described in this article, is the development of hardware cryptography, named the double cipher, to protect multimedia data over ad hoc networks and the imple‐ mentation of the double cipher into a VLSI processor named HCgorilla. The hardware algorithm of the double cipher is based on the analysis of the internal behaviour of processors. The microarchitecture-level analysis is advantageous in achieving power-conscious multime‐ dia data protection with high performance. Since power consumption and throughput are the basic metrics of processor specifications, careful attention is paid to them at each design step from the topmost architecture level to the transistor level. Actually, in recent years, the VLSI trend has exploited power conscious high performance not higher speed. Parallelism is really the global standard approach to the development of contemporary VLSI processors.

#### **2.3. Application area**

Figure 2 illustrates an application scenario of mobile phones which embed HCgorilla. Here, a standard image is multimedia data sent from the sender's mobile phone to the receiver's. Since the sender and receiver embed the same cipher chip, the entire encryption of the standard image is completely decrypted by the receiver. HCgorilla is able to carry out simultaneous processing of the encryption and decryption, taking into account bi-directional communication over networks. The common key is delivered ad hoc without relying on the network provider. Of course, public key infrastructure can be available for common key delivery. An electronic signature is also useful to certificate the message by using a security chip.

The double cipher is applicable to any multimedia data because


Image data is expressed by PPM (portable PixMap), JPEG (joint photographic expert group), BMP (bit MaP) and so on. Figure 3 exemplifies the PPM image file of a standard image. This image has 256 lines and each line consists of 256 pixels. A PPM file consists of a header and pixel data. The header contains the PPM format ID, number of pixels in width and height and graduation. The pixel data contains all the graduation elements that are the 1-byte quantization of R-, G-, or B-elements. The 1-byte R-, G-, or B-graduation elements are the target of the double cipher process.



**Figure 2.** Application scenario


90 Theory and Practice of Cryptography and Network Security Protocols and Technologies

The double cipher can be applied to cryptographic streaming. Here, the stream is a sequence of pixels, and cryptographic streaming is the continuous encryption or decryption of a whole image or a moving picture. For video display, the frame rate is 30 frames/sec, so streaming the PPM image above requires a bandwidth of

$$256 \times 256 \times 3 \times 30 \;\text{bytes/sec} \simeq 6 \;\text{Mbytes/sec} \simeq 50 \;\text{Mbps} \tag{1}$$

On the other hand, the resolution of the QVGA (quarter video graphics array) format (320×240 pixels/frame) requires a bandwidth of

$$320 \times 240 \times 3 \times 30 \;\text{bytes/sec} \simeq 6.9 \;\text{Mbytes/sec} \simeq 55 \;\text{Mbps} \tag{2}$$
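Both bandwidth figures follow from straightforward arithmetic, which a short script can reproduce (the chapter rounds Eq. (1) up to about 50 Mbps; the raw product is closer to 47 Mbps). `stream_bandwidth` is a hypothetical helper written only for this check.

```python
def stream_bandwidth(width, height, fps=30, bytes_per_pixel=3):
    """Raw bandwidth of an uncompressed byte-structured video stream."""
    byte_rate = width * height * bytes_per_pixel * fps  # bytes/sec, as in Eq. (1)
    return byte_rate, byte_rate * 8 / 1e6               # and megabits/sec

for name, (w, h) in {"PPM 256x256": (256, 256), "QVGA 320x240": (320, 240)}.items():
    bps, mbps = stream_bandwidth(w, h)
    print(f"{name}: {bps / 1e6:.1f} Mbytes/sec ~ {mbps:.0f} Mbps")
```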

Figure 4 shows the scanning modes of the image data. The continuous scan follows the exact sequence of the data format. The discontinuous scan accesses the data format in a predetermined order of discrete addresses. A mixed scan mode is also shown.

A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations

http://dx.doi.org/10.5772/56145

**Figure 4.** Scanning modes of an image (a) Continuous scan (b) Discontinuous scan (c) Mixed scan

Figure 5 illustrates the structure and encryption of audio data. This is also formed in bytes. In the case of the WAV (waveform audio file) format, it consists of a header and waveform data derived by sampling and quantizing the analogue data. The quantization derives the byte form of a sampling bit at each sampling point.

**Figure 5.** Audio data

#### **3. Double cipher**

The cipher strength does not merely depend on the encryption algorithm itself, taking a round robin attack into account. Such an attack does not seek how to encrypt, but searches for the key used in the encryption. Since the key is produced by an RNG, the RNG is the essence of cipher strength. For example, the Vernam cipher lacks sealing ability; that is, information about the plaintext leaks simply by observing the ciphertext on the communication channel. Yet the Vernam cipher is assured to be ideally strong due to its use of a full-length random number string.

In general, cipher strength can be improved by increasing not only the key length but also the variety of bit operations, as seen in the progression from DES (data encryption standard) to AES (advanced encryption standard) [12]. This follows from the general rule for deciphering a secret key cryptosystem: the attacker seeks an unknown key or password, assuming that the plaintext, ciphertext and encryption algorithm are open [13]. Accordingly, the author proposes the double cipher scheme with two RNGs [4], which increases the key length. In practice, the double cipher approach promises strong cipher strength by providing two kinds of operations. Another advantage of this approach is power consciousness with negligible hardware cost and high throughput, due to the microarchitecture-level hardware mechanism.

#### **3.1. Proposed scheme**

The hardware algorithm of the double cipher is proposed based on the analysis of the internal behaviour of processors. The additional chip area and power dissipation required for this algorithm are negligibly small. The first scheme, RAC, is a transposition cipher devised from the direct connection of a built-in RNG (an LFSR), a register file (a buffer for external data) and a data cache. A random store based on this direct connection scrambles or transposes a series of multimedia data at random without any special encryption operation. The second scheme is a data sealing algorithm implemented during the data transfer from the register file to the data cache. This complements the RAC's shortcoming and enhances the security of the data as a whole.

Figure 6 shows the basic algorithm of the double cipher in more detail. Here, *d1d2d3d4d5* exemplifies a plaintext block; *di* (*i* is an integer) is a 1-byte character, gradation element, or quantization of a sampling bit when the plaintext is text, image, or audio, respectively; *30241* is the corresponding key, that is, the output of the first RNG, LFSR1; *h(d2)h(d5)h(d3)h(d1)h(d4)* is the ciphertext that results from the double encryption. In the execution of the RAC, the plaintext and LFSR1's output are synchronized according to their sequence. For example, the first data "*d1*" and the first random number "*3*" are synchronized. During the store into the third location of the data cache, the hidable function *h* is applied to the plaintext block. A sequence of random addressing stores like this forms a cipher in the data cache.

Double cipher encryption proceeds according to the following micro-operations.

**i.** Make the LFSR1 output integer specify a register file address.

**ii.** Synchronize a data cache address with the current clock count.

**iii.** Transfer the specified register file's content to the synchronized data cache address.

During the transfer, a hidable function works on the plaintext block by using LFSR2's output. A sequence of random addressing stores like this forms a cipher in the data cache. Double cipher decryption proceeds similarly. These micro-operations are realized by simple wired logic, which is effective in maintaining usability, speed and power consciousness.
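The micro-operations above can be modelled in software. The sketch below is a behavioural model only, with several loud assumptions: 3-bit maximal LFSRs with illustrative seeds and taps, a 7-byte block (an LFSR never reaches the all-zero state, so it covers 2<sup>n</sup> − 1 addresses), and XOR with the LFSR2 output standing in for the unspecified hidable function *h*.

```python
def lfsr(state, taps, nbits):
    """Fibonacci LFSR: shift left, feeding back the XOR of the tapped bits.
    With maximal (M-sequence) taps it walks through all 2**n - 1 nonzero states."""
    mask = (1 << nbits) - 1
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        yield state

def double_cipher(block, seed1=1, seed2=5, encrypt=True):
    """Behavioural model of the double cipher on a 7-byte block.
    LFSR1 picks the register-file address (transposition, RAC); the transfer
    is XORed with LFSR2's output as a stand-in for the hidable function h."""
    assert len(block) == 7                 # 2**3 - 1 addresses for a 3-bit LFSR
    lfsr1 = lfsr(seed1, (2, 1), 3)         # these taps give a maximal sequence
    lfsr2 = lfsr(seed2, (2, 1), 3)
    out = bytearray(7)
    for clock in range(7):                 # ii. cache address = clock count
        addr = next(lfsr1) - 1             # i. LFSR1 -> register-file address
        key = next(lfsr2)
        if encrypt:
            out[clock] = block[addr] ^ key  # iii. transfer, sealed by h
        else:
            out[addr] = block[clock] ^ key  # decryption replays the sequence
    return bytes(out)

plain = b"d1d2d3d"                          # any 7-byte block
cipher = double_cipher(plain, encrypt=True)
assert double_cipher(cipher, encrypt=False) == plain
```

Because each register-file address is visited exactly once per LFSR period, replaying the same sequences inverts both the transposition and the substitution; decryption merely swaps the roles of the source and destination indices.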

**Figure 6.** Double cipher

Regarding the quantitative aspect of the double cipher process, the scheme regulates the block length to be the same as the buffer size, while the block width is usually fixed to a byte, because ubiquitous media takes the form of a byte-structured stream. The extensibility of the block structure is useful for high-speed processing. Another effect of extending the block length is to lengthen the key, so the strength of the double cipher is expected to be high in practice. The relation between the block, the *n*-bit LFSR, the register file and the data cache is as follows.

$$\text{No. of blocks} = \text{plain or ciphertext size} \; / \; \text{block size} \tag{3}$$

$$\text{Block size} = \text{logical space size} \tag{4}$$

$$\text{Block's word length} = \text{logical space length} \tag{5}$$

$$2^{n} = \text{register file's logical space size} = \text{data cache's logical space size} \tag{6}$$

The block transfer is subject to the transposition and substitution ciphers. The interaction between the block, register file, data cache and LFSRs is as follows. Core1 carries out the RAC by making the LFSR1 output specify a register file address, synchronizing a data cache address with the current clock count, and transposing the specified register file's content to the synchronized data cache address. Then, LFSR2 makes the hidable function *h* on the data lines work for the substitution of the transferred data. The resultant content, stored in the data cache, is the encryption of the register file's content. A sequence of random addressing stores like this forms a cipher in the data cache. Double cipher decryption proceeds similarly within Core2.

#### **3.2. VLSI implementation**

Since multimedia data is much longer than the plaintext shown in Figure 6, the adoption of block ciphers is inevitable in order to satisfy the demands on data quantity and performance. Figure 7 illustrates the relation among the plaintext, blocks, dominant stages of the cipher pipeline (shortened to pipes hereafter) and the double cipher process. The relation between the cipher pipe and the core is clear from Figure 8. The reason that both encryption and decryption are shown in Figure 7 is that a single ubiquitous processor should cover bi-directional communication over networks, as described in Section 2.3. A practical buffer for the external plaintext or ciphertext data is a register file, whose space and speed are limited. So, the external data is divided into blocks and stored in the register file. The transfer of a block to the register file is assumed to be in DMA (direct memory access) mode, though this is not our concern in this study.

**Figure 7.** Double cipher mechanism within a single ubiquitous processor
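Under Eq. (3) and Eq. (6), the block count and the required LFSR width follow from simple arithmetic. The helper names below (`lfsr_bits`, `num_blocks`) are illustrative, not from the chapter.

```python
import math

def lfsr_bits(block_size):
    """Smallest n with 2**n >= block size: Eq. (6) ties an n-bit LFSR
    to the logical space of the register file and data cache."""
    return math.ceil(math.log2(block_size))

def num_blocks(text_size, block_size):
    """Eq. (3): blocks needed to cover a plaintext or ciphertext."""
    return math.ceil(text_size / block_size)

for size in (2**10, 2**20, 2**30):          # 1K-, 1M-, 1G-byte texts
    print(f"{size:>10} bytes -> {lfsr_bits(size)}-bit LFSR,"
          f" {num_blocks(size, 2**10)} blocks of 1 KB")
```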

According to the micro-operations shown in Figure 7, the architecture of the VLSI processor implementing the double cipher scheme is designed as shown in Figure 8. It is a ubiquitous processor named HCgorilla. It is called a ubiquitous processor because of its features specific to ubiquitous computing: power consciousness to achieve mobility; cost performance, simplicity, functionality and usability to actualize diversity; and secureness to protect spreading platforms. HCgorilla is one of the most promising solutions for ubiquitous computing.

The correspondence of the double core, plaintext/ciphertext, register file and data cache is obvious in Figures 7 and 8. The double core covers both bi-directional communication and the recent trend toward parallelism. Media pipes with sophisticated structures are newly added in Figure 8, aiming to cover media processing, which is indispensable for ubiquitous computing. Thus, HCgorilla unifies basic aspects of PC processors, mobile processors, media processors, cryptography processors and so forth, and follows the multicore and multiple pipeline trend.

**Figure 8.** Architecture of HCgorilla

Each aspect specific to ubiquitous computing is achieved as follows. To begin with, secureness is achieved by the cipher pipe. The cipher pipe undertakes double encryption during the transfer from the register file to the data cache. While one LFSR controls the transposition cipher, RAC, another LFSR controls a substitution cipher or data sealing implemented by the hidable unit, HIDU (HIdable Data Unit). The double cipher executes the SIMD mode cipher and decipher codes. They do not attach operands, but repeat instances to transfer byte-structured data from a register file to a data cache. *rsw* encrypts the content stored in one half of the register file and *rlw* decrypts the content of the other half. These codes occupy the cipher pipe as long as the corresponding data stream continues. Thus, the SIMD mode sequence forms double cipher streaming.

The second aspect, power-conscious resource-constrained implementation, is achieved by the following design steps.

**i.** Architecture level parallelism: HCgorilla exploits parallelism, not higher speed, in order to achieve power consciousness. Parallelism at the architecture level takes a multicore and multiple pipeline structure. Each core is composed of Java-compatible media pipes and cipher pipes. In addition, the register file and data cache are shared by the double core. Following the HW/SW co-design approach, the two symmetric cores run multiple threads in parallel.

**ii.** Circuit module level: An LFSR is used as the RNG built into the cipher pipe. An LFSR falling into the category of M-sequence requires minimal additional chip area and power dissipation. A tiny *n*-bit LFSR produces a huge 2<sup>n</sup>-length random number sequence; 1K-, 1M-, and 1G-byte length texts require only 10-, 20-, and 30-bit LFSRs, respectively.

**iii.** Instruction level parallelism (ILP): A wave-pipelined MFU (MultiFunctional Unit) is built in the execution stage of the media pipe to achieve effective ILP. This is the combination of wave-pipelining and multifunctionalization of the arithmetic logic functions for media processing. Since the latency of the waved MFU is constant, independent of the arithmetic logic operations, the media instructions are free from scheduling [14]. The wave-pipelining is also effective in achieving power-conscious high speed.

**iv.** Microarchitecture level: Gated clocking is applied as described below.
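The period claim in design step ii can be checked empirically: a Fibonacci LFSR whose taps come from a primitive polynomial revisits its seed state only after 2<sup>n</sup> − 1 clocks. The sketch below assumes the well-known primitive trinomial x^10 + x^7 + 1 for the 10-bit case; it is a software check, not the hardware circuit.

```python
def lfsr_period(taps, nbits, seed=1):
    """Count clocks until a Fibonacci LFSR revisits its seed state.
    For maximal (M-sequence) taps the period is 2**n - 1, so a tiny
    register yields a huge key stream, as design step ii claims."""
    mask = (1 << nbits) - 1
    state, period = seed, 0
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        period += 1
        if state == seed:
            return period

# x**10 + x**7 + 1 is primitive: taps at bits 9 and 6 of a 10-bit register.
print(lfsr_period((9, 6), 10))  # -> 1023 == 2**10 - 1
```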

Another aspect of HCgorilla specific to ubiquitous computing is its functionality and usability. Usability is an indispensable aspect of a multimedia mobile embedded system. Platform neutrality in particular is very promising for providing multimedia entertainment such as music and games, GPS (Global Positioning System) services and so forth [15, 16]. Fulfilling this feature requires sophisticated language processing, and in this respect Java is expected to be useful. Thus, the media pipe shown in Figure 8 is a sort of interpreter-type Java CPU [17]. The instruction set of HCgorilla is composed of 58 Java-compatible instructions together with two SIMD mode cipher instructions.

In Figure 8, the "Gated clock" block is a circuit module that controls gated clocking [18]. Gated clocking is a cell-based approach to power saving at the microarchitecture level: it stops the clocking of low-activity circuit blocks that would otherwise waste switching power. Since leakage power is not a critical factor in the 0.18-μm CMOS standard cell process used in this study, the gated clock is very effective for power saving. HCgorilla controls the clocking of the stack access and execution stages, where switching probability is higher. In addition, the media pipe introduces scan logic for DFT (design for testability). This turns the pipeline registers into a shift register by serially connecting the FFs in order to read, write and retrieve the pipeline stage status. The retrieval is useful for detecting and resolving design errors, so the scan logic helps in verifying the media pipe with its sophisticated structure. On the other hand, the scan logic is not applied to the cipher pipe, because the cipher pipe is simpler and easier to verify. Moreover, scan logic and hardware cryptography are inconsistent with each other, because scan logic is apt to invite side-channel attacks [19]. While traditional cryptographic protocols assume that only I/O signals are available to an attacker, every cryptographic circuit leaks information through other physical channels. An attack that takes advantage of these physical channels is called a side-channel attack. Side-channel attacks exploit easily accessible information such as power consumption, running time, I/O behaviour under malfunctions and electromagnetic emissions.
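The danger of data-dependent behaviour can be illustrated with a deliberately leaky software toy. This is not the hardware attack of [19]: here the iteration count of an early-exit comparison stands in for measurable running time, and the helpers (`leaky_equal`, `timing_attack`) are invented for the illustration.

```python
def leaky_equal(secret, guess):
    """Byte-wise comparison with early exit; returns (match, steps taken).
    The step count plays the role of the measurable running time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return secret == guess, steps

def timing_attack(secret, alphabet="0123456789"):
    """Recover the secret digit by digit by maximizing the time leak."""
    known = ""
    while len(known) < len(secret):
        pad = "0" * (len(secret) - len(known) - 1)
        # Prefer the candidate that runs longest; break ties by a full match.
        known += max(alphabet,
                     key=lambda c: leaky_equal(secret, known + c + pad)[::-1])
    return known

print(timing_attack("4071"))  # -> 4071, recovered without reading it directly
```

A constant-time comparison (or, in hardware, removing the exposed scan path from the cipher pipe) closes exactly this kind of channel.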

Although Java is preferable as described above, from the viewpoint of language processing, Java language systems are actually more complicated than regular language systems. The platform neutrality of Java applications is due to an intermediate form, the class file, produced by Java compilers. This is convenient for the JVM (Java virtual machine), but of secondary value to the processor itself. Since ubiquitous clients are small-scale systems, the pre-processing of complicated class files should be covered by large servers hosting the Java applications. Even Java bytecode has a problem: running it is time-consuming due to the interpretation process. Although runtime systems with a built-in JVM or JIT (just-in-time) compilation are common for mobile devices like mobile phones, they need more ROM (read only memory) space, which degrades the usability, cost and performance of small ubiquitous devices.

**Software**

A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations

http://dx.doi.org/10.5772/56145

99

**Language**

**Technology**

(a) (b)

The overall evaluation of the HCgorilla chip ranging from the hardware cost, power dissipa‐ tion, and throughput to cipher strength are described. Except for the hardware cost, quanti‐ tative measurement of the real chip and actual processor is difficult at this point. However, the simulation-based evaluation using the powerful CAD tools shown in Table 3 is reasonable enough. We prepared a DUV (design under verification) simulator and a test program run on the HCgorilla chip. Employing netlist, extracted from the chip layout, and analysing the algorithmic complexity are partly introduced. Table 4 summarizes the basic evaluation of

OS Red Hat Linux 4/CentOS 5.4

Synthesis VHDL Simulation Verilog-HDL

**Figure 10.** HCgorilla.7 (a) Structure (b) Die photo and floor planning

**4. Evaluation and discussion**

ROHM 0.18-μm CMOS Kyoto univ. Standard Cell Library

**Table 3.** Design environment of HCgorilla

Synthesis tool Synopsys - Design Compiler D-2010.03 Simulation tool Synopsys - VCS version Y-2006.06-SP1 Physical Implementation tool Synopsys - IC Compiler C-2009.06 Verification tool Mentor - Calibre v2010.02\_13.12 Equivalent verification tool Synopsys – Formality B-2008.09-SP5 Static Timing analysis tool Synopsys – Primetime pts,vA-2007.12-SP3

In order to solve the issues described above, we have so far developed the software support system for the HCgorilla chips shown in Figure 9 [20]. The system is composed of a Java interface and parallelizing compilers. For example, the software support may run on proxy servers. Web delay, installing the software support on web servers, is one of the anticipative drawbacks of this approach. Obviously, it will take some time to transfer the executable code over the Internet. However, the transfer of class files to commercial processors also takes some time. In addition, the transfer time is not so important for the evaluation of web delays [21]. The main factor in web delays is the response time of the web servers. Another concern with this approach is maintaining security during the transfer. However, transferring the executable codes over the Internet does not generate a trust problem, because Java basically seeks the global standard of the Internet.

**Figure 9.** HCgorilla, web server, software support, and parallelizing compiler

HCgorilla, shown in Figure 8, is implemented in a 0.18-μm CMOS standard cell chip. The design environment is summarized in Table 3. Figure 10 shows the chip structure, die photo and floor planning of HCgorilla.7. This corresponds to Figure 8.


**Table 3.** Design environment of HCgorilla

complicated class files should be covered by large servers with Java applications. Even Java bytecodes have a problem, that is, the running of Java bytecodes is time-consuming due to the interpreting process. Although JVM or JIT (just-in-time compilation) built-in runtime systems are common for mobile devices, like mobile phones, they need more ROM (read only memory) space. This degrades the usability, cost and performance features of small ubiquitous devices.

98 Theory and Practice of Cryptography and Network Security Protocols and Technologies



**Figure 10.** HCgorilla.7 (a) Structure (b) Die photo and floor planning

#### **4. Evaluation and discussion**

The overall evaluation of the HCgorilla chip, ranging from the hardware cost, power dissipation and throughput to cipher strength, is described. Except for the hardware cost, quantitative measurement of the real chip and actual processor is difficult at this point. However, the simulation-based evaluation using the powerful CAD tools shown in Table 3 is reasonable enough. We prepared a DUV (design under verification) simulator and a test program run on the HCgorilla chip. Evaluations employing a netlist extracted from the chip layout and analyses of the algorithmic complexity are also partly introduced. Table 4 summarizes the basic evaluation of HCgorilla.7 compared with a previous derivative that we developed. These employ the same 0.18-μm CMOS standard cell technology. The overall aspects, chip parameters, and hardware specifications are also shown in this table.


A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations

http://dx.doi.org/10.5772/56145

101




#### **4.1. Hardware cost and power consumption**

The hardware resource or cost is measured by the area occupied on the real chip, shown in Figure 10 (b). Figure 11 (a) shows the sharing of the occupied area. The portions denoted by "Stack access" and "Waved MFU" show the sum of four media pipes. The portion denoted by "Cipher pipes" shows the sum of four RNGs, register file and two HIDUs. The portion of the "D cache" is the sum of the media data cache and the cipher data cache. The media pipe, cipher pipe, and data cache employ 24,367, 270, and 20,625 cells, respectively. HCgorilla.7 and HCgorilla.6 have almost the same architecture. Yet, their chip areas are different. This is due to whether or not floor planning is undertaken. Since floor planning takes more area, it contradicts low resource implementation. However, a clear layout often withstands side-channel attacks, tampering and so forth, and floor planning is indispensable for local and global clock separation, effective gated clocking and so on. Figure 11 (b) demonstrates the distribution of power dissipation derived from static evaluation, which summarizes the mean value of every cell. It does not take into account the switching condition. Figure 11 (c) shows the register file length dependency of power dissipation. The register file length swings backwards and forwards from HCgorilla.7's register file length, 128 words.

**Figure 11.** Overall evaluation of HCgorilla.7 (a) Occupied area (b) Power distribution (c) Power dissipation vs. register file length

#### **4.2. Throughput**

| Item | Value |
|---|---|
| Chip area | 2.5 mm×5 mm (HCgorilla.6); 5.0 mm×7.5 mm (HCgorilla.7) |
| Core area | 4.28 mm×6.94 mm |
| Power consumption | 275 mW (HCgorilla.6); 274 mW (HCgorilla.7) |
| Design rule | ROHM 0.18-μm CMOS |
| Wiring | 1 poly Si, 5 metal layers |
| Assembly | 158 signal pads, 32 VDD/VSS pads; package PGA257 |
| Power supply | 1.8 V (I/O 3.3 V) |
| Instruction cache | 16 bits×64 words×2 |
| Data cache | 16 bits×128 words×2 |
| Stack memory | 16 bits×16 words×8 |
| Register file | 16 bits×128 words |
| RNG | 6 bits×2 |
| No. of cores | 2 |
| ILP degree | 4 |
| Clock frequency | 200 MHz |
| Throughput (media pipe) | 0.17 GIPS |
| Throughput (cipher pipe) | 0.1–0.2 GOPS |
| Transfer rate | 160–320 Mbps |

**Table 4.** Prospective specifications and potential aspects of HCgorilla chips

The cipher pipe's throughput, that is the mean value of the number of double cipher operations per unit of time, is derived from

$$\text{Throughput [OPS]} = \frac{\text{no. of double cipher operations}}{\text{running time [sec]}} \tag{7}$$

Since the running in Equation (7) is the repetition of block transfer, register file rewriting and the double cipher operation, the running time is derived from

$$\text{Running time} = m(t_1 + t_2) + t_3 \tag{8}$$

Here, *m* is the number of blocks. The block width is usually fixed to a byte because ubiquitous media, like pixels, take the form of a byte-structured stream. As a consequence, the register file width is evaluated in bytes. On the other hand, the register file length is measured expediently in words, as shown in Figure 7. *t*1 is the block access time, or the latency taken to transfer a block to the register file. *t*2 is the time of an SIMD mode cipher operation. *t*3 is the latency taken to transfer a block from the data cache. *t*2 is evaluated by the DUV simulator that simulates a test program run on the HCgorilla chip. As for *t*1 and *t*3, let the memory access speed of mobile phones be 208 to 532 Mbytes/s and the mean value be adopted. Although such a method based on Equation (8) is a compromise among analysis, simulation and measurement, it is reliable considering that the cipher streaming of media data is undertaken regularly.
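The timing model of Equations (7) and (8) can be sketched in a few lines; the numbers below are illustrative assumptions, not measured HCgorilla values.

```python
# Sketch of Equations (7) and (8). All parameter values here are
# illustrative assumptions, not the DUV-simulated or measured ones.

def running_time(m, t1, t2, t3):
    """Equation (8): m blocks; t1 block access, t2 SIMD cipher op, t3 write-back."""
    return m * (t1 + t2) + t3

def throughput_ops(n_ops, time_s):
    """Equation (7): double cipher operations per unit of time."""
    return n_ops / time_s

m = 4000                 # assumed number of blocks in the plaintext
t1 = t3 = 1e-6           # assumed block transfer latencies [sec]
t2 = 5e-9                # assumed SIMD cipher operation time [sec]

t = running_time(m, t1, t2, t3)
print(throughput_ops(m, t))   # double cipher operations per second
```

One double cipher operation per block means the throughput is simply `m / running_time`.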

The cipher pipe's transfer rate, that is the mean value of the amount of transferred data per unit of time, is given by

$$\text{Transfer rate [bps]} = \frac{\text{full text size [b]}}{\text{transfer time [sec]}} \tag{9}$$

Identifying the transfer time in the denominator of Equation (9) with the running time in Equation (8), the following relation is derived.

$$\text{Transfer rate [Mbps]} = \text{throughput [GOPS]} \times \text{register file width [b]} \times 10^3 \tag{10}$$
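Equation (10) follows from Equations (7) and (9) by identifying the full text size with the number of double cipher operations times the register file width:

$$\text{Transfer rate [bps]} = \frac{\text{no. of double cipher operations} \times \text{register file width [b]}}{\text{running time [sec]}} = \text{throughput [OPS]} \times \text{register file width [b]}$$

Since 1 GOPS is 10^9 operations per second, and 10^9 b/s equals 10^3 Mb/s, expressing the throughput in GOPS and the rate in Mbps introduces the factor 10^3 of Equation (10).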

Figure 12 shows the register file length dependency of HCgorilla.7's throughput in running a test program as shown in Figure 13. The register file length swings similarly to Figure 11 (c). The test program is composed of the double cipher and media processing. The plaintext used in the double cipher processing is 240×320-pixel QVGA format data. Then, the time of the SIMD mode cipher operation, *t*2, is derived and the throughput in GOPS is derived from Equations (8) and (7). Similarly, the throughput in GIPS is derived from

$$\text{Throughput [IPS]} = \frac{\text{no. of instructions}}{\text{running time [sec]}} \tag{11}$$


**Figure 13.** A test program and the internal behaviour of HCgorilla


**Figure 12.** Throughput vs. register file length

The running time of the media processing is also derived by using the DUV simulator that simulates the test program. The media pipe's throughput is almost constant in Figure 12. This is because the clock speed is kept constant in varying the length of the register file.

In order to justify the instruction scheduling of the free media pipe, the media processing of the test program is coded in three ways, that is, routines *A*, *B*, and *C*. These are distinguished in that the variable *k* and the loop count are integers or floating point numbers. Routine *A* uses only


integers and Routine *B* floating point numbers. Routine *C* uses both integer and floating point numbers. The hardware parallelism is utilized by dividing the summation into four threads and assigning them into four stacks in order to make full use of the two waved MFUs. The simulation result shows that the media pipe's throughput differs little between routines *A*, *B*, and *C*.
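The four-way thread partitioning of the summation can be mirrored in software; the sketch below is a toy software stand-in (routine *A*'s integer variant, with four worker threads in place of HCgorilla's four stacks), not the chapter's actual test program.

```python
# Toy stand-in for routine A: the summation loop is split into four threads,
# mirroring how the test program assigns four threads to four stacks.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(start, stop):
    s = 0
    for k in range(start, stop):   # integer loop variable, as in routine A
        s += k
    return s

def routine_a(n=1_000, threads=4):
    # Split [0, n) into `threads` contiguous chunks and sum them in parallel.
    bounds = [(i * n // threads, (i + 1) * n // threads) for i in range(threads)]
    with ThreadPoolExecutor(max_workers=threads) as ex:
        return sum(ex.map(lambda b: partial_sum(*b), bounds))

print(routine_a())   # same result as the sequential sum of 0..999
```

A floating point variant (routine *B*) would differ only in the type of `k`, which is exactly why the media pipe's throughput differs little between the routines.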

#### **4.3. Cipher strength**

Figure 14 shows how to measure the double cipher strength by experimenting with a rough-and-ready guess or round robin attack in a ubiquitous environment, where HCgorilla built-in platforms are used. The cipher strength is the degree of endurance against attack by a malicious third party. The attack is the third party's irregular action to decipher, break, or crack the cipher. This is clearly distinguished from decryption, that is, the right recipient's regular process of recovering the plaintext by using the given key.


According to a normal scenario, the rules applied in deciphering the secret key cryptography in Figure 14 are as follows.

**i.** A plaintext, a ciphertext and the cipher algorithm are open to third parties.

**ii.** The key or the initial value of the RNG used in encryption is secret from third parties, though it is open to the right recipient.

A true key is sought out in deciphering. Sometimes it is called a password. The reason a plaintext and a ciphertext are open is that they are numerous and, in turn, their quantity is beyond protection. In addition, it is reasonable that the cipher algorithm, or its specification, is open, because its value is in its usability in the communication stages. This demands the spread of the algorithm in certain communities.

LFSR1 and LFSR2 in Figure 14 are RNGs for the double cipher built in the cipher pipes of a sender and a right recipient. They are also used by third parties according to rule (i). Key1 and Key2 are the initial values of LFSR1 and LFSR2 issued by the sender. Further encryption of these secret keys that are the target of attack is conventionally applied to maintain their confidentiality. For example, WEP cipher keys are encrypted by the RC4 cipher. In Figure 14, a public key system is available to exchange the key between a sender platform and a right recipient platform.

Text1 is a plaintext/ciphertext and Text2 is a ciphertext/plaintext derived by applying Key1 and Key2 to Text1. Key*A* and Key*B* are the guesses for Key1 and Key2 and are the initial values of the third party's LFSR1 and LFSR2. RNG*A* and RNG*B*, which are completely independent of LFSR1 and LFSR2, are used for rough-and-ready guesses or random guesses by the third party. Text3 is the guess of Text1 by the third party. When Text3 disagrees with Text1, one of RNG*A* and RNG*B* is forced to proceed to the next stage. If the disagreement continues by the end of the cycle of random number generation, the comparison of Text3 and Text1 is repeated by using the other RNG. Thus, the round robin attack against the double cipher undergoes nested loops.

**Figure 14.** Measurement of double cipher strength (a) Round robin attack (b) Measurement flow

From the discussion described above, the cipher strength is given by

$$\begin{aligned}
\text{Cipher strength} &= \text{time for the round robin attack} \\
&= \text{no. of round robin attacks} \times \text{clock cycle time} \\
&\le 2^{\text{LFSR1 size}+\text{LFSR2 size}} \times \text{clock cycle time}
\end{aligned} \tag{12}$$

Since

$$\text{LFSR1 or LFSR2 size} \ge \log_2 \left\{ \frac{\text{register file length}}{2} \right\} \tag{13}$$

holds from Figure 13, the register file length is a critical factor of cipher strength. However, enlarging memory size surely causes an increase in power dissipation, the deterioration of clock speed, throughput and so forth. Thus, the demand of cipher strength is inevitably limited.
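To illustrate the LFSR sizing of Equation (13): a maximal-length *n*-bit LFSR cycles through 2^n − 1 non-zero states, so a 6-bit LFSR suffices to address half of a 128-word register file. The sketch below assumes a 6-bit Galois LFSR with the primitive polynomial x^6 + x^5 + 1; the chapter does not give HCgorilla's actual tap positions.

```python
import math

def lfsr_period(nbits, taps):
    """Count the states of a Galois (right-shift) LFSR before it repeats."""
    state = 1
    steps = 0
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps          # feedback taps as a bit mask
        steps += 1
        if state == 1:
            return steps

# Assumed taps 0b110000 encode x^6 + x^5 + 1, a primitive polynomial,
# so the 6-bit LFSR is maximal-length.
print(lfsr_period(6, 0b110000))                         # 63 = 2**6 - 1

# Equation (13): addressing half of a 128-word register file needs
# at least log2(128 / 2) = 6 bits of LFSR.
print(math.ceil(math.log2(128 / 2)))                    # 6
```

This also shows why, in Figure 15, varying the register file from 32 to 512 words varies the LFSR size from 5 to 9 bits.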

Figure 14 (b) shows the flow of measuring the number of round robin attacks in Equation (12). A result is achieved for every round robin attack. *j* is the number of blocks. The measurement steps are distinguished in the cases of *j*=0 and *j*>1 because the same random number sequence is issued for all the blocks from Figure 13. *k* is the number of RAC trial attacks. *l* is the number of HIDU trial attacks. Counting *k* and *l* through the experiment, the double cipher strength is derived from the number of nested loops or the time needed to decipher. This evaluates the degree of endurance or the strength. Actually, the cryptographic strength is the number of attack trials multiplied by the time for decryption. Each of the nested loops guesses a key at random, decrypts the ciphertext by using the key and judges if the decipherment is successful.
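The nested loops of the round robin attack can be sketched as below. This is a toy model: `try_decipher` stands in for the real decrypt-and-compare of Text3 against Text1, and the key spaces simply mirror the 6-bit LFSR sizes of Table 4, giving the 2^(LFSR1 size + LFSR2 size) trial bound of Equation (12).

```python
# Toy model of the round robin attack of Figure 14 (b): nested loops over
# both guessed keys. try_decipher stands in for one decrypt-and-compare.
from itertools import product

LFSR1_BITS = LFSR2_BITS = 6          # RNG sizes as in Table 4 (6 bits x 2)

def try_decipher(key_a, key_b, true_keys):
    """Stand-in for one trial: decrypt with (KeyA, KeyB), compare Text3 to Text1."""
    return (key_a, key_b) == true_keys

def round_robin_attack(true_keys):
    """Nested loops over both key spaces; returns the number of trials used."""
    trials = 0
    for key_a, key_b in product(range(2 ** LFSR1_BITS), range(2 ** LFSR2_BITS)):
        trials += 1
        if try_decipher(key_a, key_b, true_keys):
            return trials
    return trials

# The worst case reaches the Equation (12) bound of 2**(6 + 6) trials.
worst = round_robin_attack((2 ** LFSR1_BITS - 1, 2 ** LFSR2_BITS - 1))
print(worst)                         # 4096 = 2**12

clock_cycle = 1 / 200e6              # 200-MHz clock, as in Table 4
print(worst * clock_cycle)           # cipher strength as attack time [sec]
```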

Figure 15 shows the cipher strength achieved by practicing the method shown in Figure 14 and by using the 240×320-pixel QVGA format data, which is the same test data as is used by the test program shown in Figure 13. Note that the test data does not affect the cipher strength from Figure 13 and Figure 14 (b). It depends entirely on the block size or the half size of the register file, because the blocks, after the success of the first attack, are simply decrypted by the known key. The abscissa is notched by the full length of the register file and the half size indicates a logical space. Although, from Table 4, HCgorilla.7's register file length is 128 words, to understand the dependency of the cipher strength, the register file length is varied from 32 to 512 words. Correspondingly, the size of the LFSR is varied from 5 to 9 bits. The measurements are undertaken five times for each length. The dotted lines show the upper limit or the maximum number of round robin attacks in the right hand side of Equation (12). The maximum strength of the double cipher is proportional to 2^(LFSR1 size + LFSR2 size) from Figure 14 (b). Similarly, the single cipher reaches 2^(LFSR1 size).

**Figure 15.** Cipher strength vs. register file length

causes more power dissipation. In fact, from Figure 11 (c), the power dissipation

A Double Cipher Scheme for Applications in Ad Hoc Networks and its VLSI Implementations

**iii.** Reduce *t*<sup>2</sup> by the increase in speed of the cipher pipe's clock. Increasing the number

Table 5 summarizes various aspects of the double cipher vs. usual common key ciphers. The

**i.** RAC simplifies the processing of multimedia data, because RAC directly handles a

**ii.** RAC allows expandable block length. Different from usual common key ciphers, RAC

**iii.** RAC handles wider blocks. The byte string is wider than the bit string used by other usual ciphers. At this point, the width is 2 bytes and is 16 times wider.

**iv.** RAC encrypts a plaintext block without any arithmetic logic operation. The cipher

byte string whose structure is the same as that of the multimedia data as shown in

does not fix the block length, but regulates it to be the same as the buffer size. Although the register file's logical length is 64 words at this point, we are planning

mechanism due to the random transfer between a register file and a data cache is

**Transforma tion**

Bitwise XOR, scramble, shift, etc.

These aspects allow double cipher higher throughput, shorter running time and practical strength. The throughput is given by the product of block data size and clock frequency, assuming the processing of one block per clock. Thus, higher clock frequency together with expandable data size provides higher throughput. Since the block data size is the product of expandable block length and width, it is allowed to increase with ease. The expandable block length allows the double cipher to have practical cipher strength. The extension of the block

Vernam Bit Full length Short Strong Large

**Throughput**

Needless High Medium Practically

**strength String Resource**

**Running time**

**Cipher**

http://dx.doi.org/10.5772/56145

107

Small

strong

Medi-um Medi-um Small

Long Strong Large

rapidly increases from the 128-word length.

of pipeline stages is also useful for this aim.

to increase it.

Double cipher

Data sealing

A5

Block AES

quite different from other ciphers.

RAC Byte As long as a buffer

AES-CTR 128 bits 1-2 times

Stream LFSR A few bits or a

DES 64 bits

**Table 5.** Double cipher vs. regular common key cryptography

double cipher, especially RAC, has the following characteristic aspects.

Table 2. The effect of simplicity ranges over every aspect.

**Block Cipher means**

**unit Length Key**

character

(register file) length

Bit Bitwise XOR

length of AES key

#### **4.4. Discussion**

Overall, the discussion is based on the evaluation described above. In view of the cipher strength, Figure 15 indicates that a longer buffer size is desirable. The double cipher increases the cipher strength as the key length or the cycle of random numbers expands. Although the hardware implementation of longer cycle random number generation is very easy, it surely involves a power consuming increase in the size of the stream buffer or register file. Consid‐ ering that cipher algorithms are open to third parties in the evaluation of cipher strength, hardware specifications are more important than cipher strength in developing HCgorilla. While the power dissipation rapidly increases from the 128-word length in Figure 11 (c), the cipher pipe's throughput almost saturates at the 128-word length in Figure 12. The 128-word length is the optimum buffer size because (*i*) the power dissipation of mobile processors is usually less than 1 watt, and (*ii*) the cipher pipe's transfer rate shown in Table 4 is comparable to that of an ATM.

In view of cipher streaming, 0.1 GOPS is allowable for video format, because the running time used for cipher streaming occupies a very small portion of the video processing time. Actually, 1-Mbyte of text forms 4.3 flames of QVGA format. It takes 143 msec in video processing. The running time of cipher streaming is only 3.6% of 143-msec video processing. On the other hand, a 1-minute video takes 2.2-sec running time for cipher streaming, because, as shown in Equation (2), the text size is 414 Mbytes from the bandwidth. In the case of PPM format, 1- Mbyte of text forms 5 flames. This takes 167 msec in video processing, that is, only 3%. Then, as shown in Equation (1), a 1-minute video's text size is 360 Mbytes from the bandwidth. In this case, cipher streaming takes only 1.9 sec.

However, HCgorilla's throughput in GOPS is not always reasonable in view of CPU perform‐ ance. The throughput of commercial mobile processors is more than 10 GOPS, though this has the benefit of the cutting-edge technologies of process, clock and hardware parallelism. In order to further enhance HCgorilla's GOPS value, which directly affects the increase in Mbps value, the running time should be decreased from Equation (7). This is possible with respect to the following strategies.


causes more power dissipation. In fact, from Figure 11 (c), the power dissipation rapidly increases from the 128-word length.

**iii.** Reduce *t*<sup>2</sup> by the increase in speed of the cipher pipe's clock. Increasing the number of pipeline stages is also useful for this aim.

Table 5 summarizes various aspects of the double cipher vs. usual common key ciphers. The double cipher, especially RAC, has the following characteristic aspects.




to understand the dependency of the cipher strength, the register file length is varied from 32 to 512 words. Correspondingly, the size of the LFSR is varied from 5 to 9 bits. The measurements are undertaken five times for each length. The dotted lines show the upper limit, that is, the maximum number of round robin attacks on the right-hand side of Equation (11). From Figure 14 (b), the maximum strength of the double cipher is proportional to 2^(LFSR1 size + LFSR2 size); similarly, the single cipher reaches 2^(LFSR1 size).

#### **4.4. Discussion**

The overall discussion is based on the evaluation described above. In view of cipher strength, Figure 15 indicates that a longer buffer size is desirable. The double cipher increases in strength as the key length, or the cycle of random numbers, expands. Although the hardware implementation of longer-cycle random number generation is very easy, it involves a power-consuming increase in the size of the stream buffer or register file. Considering that cipher algorithms are open to third parties in the evaluation of cipher strength, hardware specifications are more important than cipher strength in developing HCgorilla. While the power dissipation rapidly increases from the 128-word length in Figure 11 (c), the cipher pipe's throughput almost saturates at the 128-word length in Figure 12. The 128-word length is therefore the optimum buffer size, because (*i*) the power dissipation of mobile processors is usually less than 1 watt, and (*ii*) the cipher pipe's transfer rate shown in Table 4 is comparable to that of an ATM.

In view of cipher streaming, 0.1 GOPS is allowable for video formats, because the running time used for cipher streaming occupies a very small portion of the video processing time. A 1-Mbyte text forms 4.3 frames of QVGA format, which takes 143 msec in video processing; the running time of cipher streaming is only 3.6% of this. On the other hand, a 1-minute video takes 2.2 sec of running time for cipher streaming, because, as shown in Equation (2), the text size is 414 Mbytes from the bandwidth. In the case of PPM format, 1 Mbyte of text forms 5 frames, which takes 167 msec in video processing, so cipher streaming is only 3%. Then, as shown in Equation (1), a 1-minute video's text size is 360 Mbytes from the bandwidth. In this case, cipher streaming takes only 1.9 sec.

However, HCgorilla's throughput in GOPS is not always reasonable in view of CPU performance. The throughput of commercial mobile processors is more than 10 GOPS, though this has the benefit of cutting-edge process, clock and hardware-parallelism technologies. In order to further enhance HCgorilla's GOPS value, which directly affects the increase in Mbps value, the running time should be decreased from Equation (7). This is possible with respect to the following strategies.

**i.** Reduce *t*1 and *t*3 by using a memory buffer with faster access speed.

**ii.** Reduce the summation of block access and transfer times, ∑(*t*1+*t*3), by increasing the register file size. Expanding the register file length leads to an increase in cipher strength. However, it needs to take into account the trade-off between throughput and power dissipation. Judging from Equation (7), increasing the register file size

These aspects allow the double cipher higher throughput, shorter running time and practical strength. The throughput is given by the product of block data size and clock frequency, assuming the processing of one block per clock. Thus, a higher clock frequency together with expandable data size provides higher throughput. Since the block data size is the product of expandable block length and width, it is allowed to increase with ease. The expandable block length also gives the double cipher practical strength: extending the block length makes the key length long, and cipher strength is closely related to key length. The evaluation of running time is based on computational complexity, and the dominant factor is the total number of iterative loops. AES has nested loops for arithmetic, logic and functional operations: the first loop is for the matrix operation and the second for the rounds. However, RAC is released from such complexity.
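The strength relation described above can be illustrated with a small sketch. This is not the chapter's Equation (11); it only shows the claimed scaling, namely that the exhaustive-search bound for the double cipher grows with the combined LFSR state while the single cipher is limited by one LFSR alone. Function names are illustrative.

```python
# Illustrative sketch (not the chapter's Equation (11)): upper bounds on
# round robin attack trials as a function of LFSR state size.

def single_cipher_strength(lfsr1_bits: int) -> int:
    """Single cipher: at most 2^(LFSR1 size) trials exhaust the key space."""
    return 2 ** lfsr1_bits

def double_cipher_strength(lfsr1_bits: int, lfsr2_bits: int) -> int:
    """Double cipher: the bound grows to 2^(LFSR1 size + LFSR2 size)."""
    return 2 ** (lfsr1_bits + lfsr2_bits)

# Register file lengths of 32..512 words correspond to LFSR sizes of 5..9 bits.
for bits in range(5, 10):
    words = 2 ** bits
    print(f"{words:4d} words: single {single_cipher_strength(bits):>6} "
          f"double {double_cipher_strength(bits, bits):>10}")
```

With equal LFSR sizes the double cipher squares the attack bound, which is why the text links longer register files (and hence longer key cycles) to higher cipher strength.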

#### **5. Conclusion**

The author has proposed a cipher scheme useful in practice for ad hoc networks, with temporarily sufficient strength. The proposal is based on two cipher schemes: the first is based on RAC and the second uses a data sealing algorithm. This double cipher scheme can be implemented in a security-aware, power-conscious and high-performance single-chip VLSI processor by using built-in RNGs. The stream buffer size is determined from the trade-off between cipher strength, power dissipation and throughput. In practice, this is important because hardware specifications are more important than cipher strength in VLSI implementation.

HCgorilla is a sophisticated ubiquitous processor implementing the double cipher scheme. The VLSI implementation of HCgorilla is undertaken by using a 0.18-μm standard cell CMOS chip. The hardware cost, power dissipation, throughput and cipher strength of the latest HCgorilla chip are evaluated from the real chip, logic synthesis and simulation by using CAD tools, partly supplemented by an examination of algorithmic complexity. The evaluation shows that HCgorilla is a power-conscious, high-performance hardware approach that treats multimedia data with practical security over a ubiquitous network.

The future work of this research is the implementation of the double cipher in HCgorilla's media pipe. Although the cipher pipe and the media pipe are explicitly distinguished from each other in this study, mixing instruction-scheduling-free media processing with cipher processing at the microarchitecture level will further contribute to power-conscious security in ad hoc networks. Since such an improvement applies the scan logic to encrypted data flow, an additional problem is raised, that is, whether the scan logic is able to avoid attacks by third parties. Apart from the disclosure of cipher algorithms, a design tolerant to side-channel attack and resistant to tampering is essential for VLSI implementation.

#### **Author details**

Masa-aki Fukase

Address all correspondence to: slfuka@eit.hirosaki-u.ac.jp

Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan

#### **References**

[1] Saha, D, & Mukherjee, A. Pervasive Computing: A Paradigm for the 21st Century. Computer Magazine (2003), 36(3), 25-31.

[2] Wu, J, & Stojmenovic, I. Ad Hoc Networks. Computer Magazine (2004), 37(2), 29-31.

[3] Satyanarayanan, M. Privacy: The Achilles Heel of Pervasive Computing? IEEE Pervasive Computing (2003), 2(1), 2-3.

[4] Fukase, M, Uchiumi, H, Ishihara, T, Osumi, Y, & Sato, T. Cipher and Media Possibility of a Ubiquitous Processor: proceedings of the International Symposium on Communications and Information Technologies, ISCIT 2009, September 2009, Incheon, Korea, 343-347.

[5] Fukase, M, & Sato, T. Double Cipher Implementation in a Ubiquitous Processor Chip. American Journal of Computer Architecture (2012), 1(1), 6-11.

[6] Sato, T, Imaruoka, S, & Fukase, M. Hardware-Based IPS for Embedded Systems: proceedings of the 13th World Multi-Conference on Systemics, Cybernetics and Informatics, WMSCI 2009, July 2009, Orlando, Florida, 74-79.

[7] Chikazawa, T, & Matui, M. Globalization of Japanese Cryptographic Technology. Journal of Digital Practice (2011), 2(4), 267-273.

[8] Jerraya, A, Tenhunen, H, & Wolf, W. Multiprocessor Systems-on-Chips. Computer Magazine (2005), 38(7), 36-40.

[9] Oppliger, R. Security and Privacy in an Online World. Computer Magazine (2011), 44(9), 21-22.

[10] Stavrou, A, Voas, J, Karygiannis, T, & Quirolgico, S. Building Security into Off-the-shelf Smartphones. Computer Magazine (2012), 45(2), 82-84.

[11] Burns, F, Bystrov, A, Koelmans, A, & Yakovlev, A. Security Evaluation of Balanced 1-of-n Circuits. IEEE Transactions on VLSI Systems (2011), 19(11), 2135-2139.

[12] Wang, M-Y, Su, C-P, Horng, C-L, Wu, C-W, & Huang, C-T. Single- and Multi-core Configurable AES Architectures for Flexible Security. IEEE Transactions on VLSI Systems (2010), 18(4), 541-552.

[13] Matsui, M. Survey of the Research and Development of MISTY Cryptography. Journal of Digital Practice (2011), 2(4), 282-289.

[14] Fukase, M, & Sato, T. A Ubiquitous Processor Built-in a Waved Multifunctional Unit. ECTI-CIT Transactions (2010), 4(1), 1-7.

[15] Lawton, G. Moving Java into Mobile Phones. Computer Magazine (2002), 35(6), 17-20.

[16] Kochnev, D. S, & Terekhov, A. A. Surviving Java for Mobiles. IEEE Pervasive Computing (2003), 2(2), 90-95.

[17] Chen, K-Y, Chang, J. M, & Hou, T-W. Multithreading in Java: Performance and Scalability on Multicore Systems. IEEE Transactions on Computers (2011), 60(11), 1521-1534.

[18] Lee, Y, Jeong, D-K, & Kim, T. Comprehensive Analysis and Control of Design Parameters for Power Gated Circuits. IEEE Transactions on VLSI Systems (2011), 19(3), 494-498.

[19] Alioto, M, Poli, M, & Rocchi, S. A General Power Model of Differential Power Analysis Attacks to Static Logic Circuits. IEEE Transactions on VLSI Systems (2010), 18(5), 711-724.

[20] Fukase, M. A Ubiquitous Processor Embedded With Progressive Cipher Pipelines. International Journal of Multimedia Technology (2013), 3(1), 31-37.

[21] Zari, M, Saiedian, H, & Naeem, M. Understanding and Reducing Web Delays. Computer Magazine (2001), 34(12), 30-37.

**Chapter 5**

## **Introduction to Quantum Cryptography**

### Xiaoqing Tan

http://dx.doi.org/10.5772/56092

Additional information is available at the end of the chapter

> © 2013 Tan; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

Broadly speaking, cryptography is the problem of doing communication or computation involving two or more parties who may not trust one another. The best known cryptographic problem is the transmission of secret messages. Suppose two parties wish to communicate in secret. For example, you may wish to give your credit card number to a merchant in exchange for goods, hopefully without any malevolent third party intercepting your credit card number. The way this is done is to use a cryptographic protocol. The most important distinction is between private key cryptosystems and public key cryptosystems.

The way a private key cryptosystem works is that two parties, 'Alice' and 'Bob', communicate by sharing a private key, which only they know. The exact form of the key doesn't matter at this point – think of a string of zeroes and ones. The point is that this key is used by Alice to encrypt the information she wishes to send to Bob. After Alice encrypts the message, she sends the encrypted information to Bob, who must now recover the original information. Exactly how Alice encrypts the message depends upon the private key, so that to recover the original message Bob needs to know the private key, in order to undo the transformation Alice applied.
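The encrypt/undo symmetry described above can be made concrete with a minimal sketch, assuming a one-time-pad-style XOR cipher (one of many possible private key schemes; the function names are illustrative):

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Alice encrypts by XOR-ing each byte with the shared private key."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Bob undoes the transformation with the same key (XOR is its own inverse)."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"credit card 1234"
key = secrets.token_bytes(len(message))  # the shared string of zeroes and ones

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message  # Bob recovers the original
```

Anyone holding the key can invert the transformation, which is exactly why the key distribution problem discussed next is so critical.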

Unfortunately, private key cryptosystems have some severe problems in many contexts. The most basic problem is how to distribute the keys. In many ways, the key distribution problem is just as difficult as the original problem of communicating in private – a malevolent third party may be eavesdropping on the key distribution, and then use the intercepted key to decrypt some of the message transmission.

One of the earliest discoveries in quantum computation and quantum information was that quantum mechanics can be used to do key distribution in such a way that Alice and Bob's security cannot be compromised. This procedure is known as **quantum cryptography** or **quantum key distribution** (abbreviated QKD). The basic idea is to exploit the quantum mechanical principle that observation in general disturbs the system being observed. Thus, if there is an eavesdropper listening in as Alice and Bob attempt to transmit their key, the presence of the eavesdropper will be visible as a disturbance of the communications channel Alice and Bob are using to establish the key. Alice and Bob can then throw out the key bits established while the eavesdropper was listening in, and start over.
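This disturbance argument can be illustrated with a toy simulation of an intercept-resend attack on BB84-style basis sifting. This is a classical sketch under simplifying assumptions (no channel noise, perfect detectors; names are illustrative), not an implementation of any deployed system: without an eavesdropper the sifted keys agree, while an intercept-resend attacker forces roughly a 25% error rate, which Alice and Bob detect by publicly comparing a sample of their bits.

```python
import random

def sifted_error_rate(n_photons: int, eve_present: bool, seed: int = 1) -> float:
    """Intercept-resend model: Eve measuring in a random basis disturbs the
    photons, showing up as errors in Alice and Bob's sifted key."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_photons):
        bit = rng.randrange(2)          # Alice's raw key bit
        basis_a = rng.randrange(2)      # 0 = rectilinear, 1 = diagonal
        state_basis, state_bit = basis_a, bit
        if eve_present:
            basis_e = rng.randrange(2)
            if basis_e != state_basis:  # wrong basis: outcome is random
                state_bit = rng.randrange(2)
            state_basis = basis_e       # photon is re-sent in Eve's basis
        basis_b = rng.randrange(2)
        if basis_b != basis_a:
            continue                    # discarded during public sifting
        result = state_bit if basis_b == state_basis else rng.randrange(2)
        kept += 1
        errors += (result != bit)
    return errors / kept

print(sifted_error_rate(20000, eve_present=False))  # 0.0: keys agree
print(sifted_error_rate(20000, eve_present=True))   # ~0.25: Eve is exposed
```

The 25% figure follows because Eve guesses the wrong basis half the time, and each wrong guess randomizes Bob's matched-basis result with probability one half.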

In general, the goal of quantum cryptography is to perform tasks that are impossible or intractable with conventional cryptography. Quantum cryptography makes use of subtle properties of quantum mechanics, such as the quantum no-cloning theorem and the Heisenberg uncertainty principle. Unlike conventional cryptography, whose security is often based on unproven computational assumptions, quantum cryptography has an important advantage in that its security is often based on the laws of physics. Thus far, proposed applications of quantum cryptography include QKD, quantum bit commitment and quantum coin tossing, with varying degrees of success. The most successful and important application – QKD – has been proven to be unconditionally secure. Moreover, experimental QKD has now been performed over hundreds of kilometers, over both standard commercial telecom optical fibers and open air. In fact, commercial QKD systems are currently available on the market [5].

Classical secret sharing can be used in a number of ways besides a joint checking account. The secret key could access a bank vault, or a computer account, or any of a variety of things. In addition, secret sharing is a necessary component for performing secure distributed computations among a number of people who do not completely trust each other. With the boom in quantum computation, it seems possible, even likely, that quantum states will become nearly as important as classical data. It might therefore be useful to have some way of sharing secret quantum states as well as secret classical data. Such a **quantum secret sharing** (abbreviated QSS) scheme might be useful for sharing quantum keys, such as those used in quantum key distribution or in other quantum cryptographic protocols. In addition, QSS might allow us to take advantage of the additional power of quantum computation in secure distributed computations.

Imagine that it is fifteen years from now and someone announces the successful construction of a large quantum computer. The New York Times runs a front-page article reporting that all of the public-key algorithms used to protect the Internet have been broken by this quantum computer. Perhaps, after seeing quantum computers destroy RSA, DSA and ECDSA, Internet users will leap to the conclusion that cryptography is dead. To address this problem, some researchers proposed **post-quantum cryptography**, which refers to research on cryptographic primitives (usually public-key cryptosystems) that are not breakable using quantum computers. The term came about because most currently popular public-key cryptosystems rely on the integer factorization problem or the discrete logarithm problem, both of which would be easily solvable on large enough quantum computers using Shor's algorithm [6] [7]. Even though current publicly known experimental quantum computing is nowhere near powerful enough to attack real cryptosystems, many cryptographers are researching new algorithms, in case quantum computing becomes a threat in the future. This work has been popularized by the PQCrypto conference series, held since 2006.
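The classical secret sharing mentioned above (splitting a secret key so that only a sufficiently large group can reconstruct it) is commonly realized with Shamir's threshold scheme. The sketch below is a standard textbook construction, not taken from this chapter; the prime and function names are illustrative. Any *k* of the *n* shares reconstruct the secret via Lagrange interpolation at zero.

```python
import random

PRIME = 2_147_483_647  # a Mersenne prime; all arithmetic is done mod PRIME

def make_shares(secret: int, k: int, n: int, rng=random.Random(7)):
    """Split `secret` into n shares of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(secret=123456, k=3, n=5)
assert reconstruct(shares[:3]) == 123456   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123456
```

Fewer than *k* shares reveal nothing about the secret, which is the property QSS aims to carry over to quantum states.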

The first quantum cryptographic ideas were proposed by Stephen Wiesner in "Conjugate Coding" [1], which unfortunately took more than ten years to see the light of print. In the meantime, Charles H. Bennett (who knew of Wiesner's idea) and Gilles Brassard picked up the subject and brought it to fruition in a series of papers that culminated with the demonstration of an experimental prototype that established the technological feasibility of the concept [2]. Quantum cryptographic systems take advantage of Heisenberg's uncertainty principle, according to which measuring a quantum system in general disturbs it and yields incomplete information about its state before the measurement. Eavesdropping on a quantum communication channel therefore causes an unavoidable disturbance, alerting the legitimate users. This yields a cryptographic system for the distribution of a secret random cryptographic key between two parties initially sharing no secret information that is secure against an eavesdropper having at her disposal unlimited computing power. Once this secret key is established, it can be used together with classical cryptographic techniques such as the one-time pad (OTP) to allow the parties to communicate meaningful information in absolute secrecy.

The second major type of cryptosystem is the public key cryptosystem. Public key cryptosystems don't rely on Alice and Bob sharing a secret key in advance. Instead, Bob simply publishes a 'public key', which is made available to the general public. Alice can make use of this public key to encrypt a message which she sends to Bob. A third party cannot use Bob's public key to decrypt the message. Public key cryptography did not achieve widespread use until the mid-1970s, when it was proposed independently by Whitfield Diffie and Martin Hellman. Shortly afterwards, Ronald Rivest, Adi Shamir, and Leonard Adleman developed the RSA cryptosystem, which at the time of writing is the most widely deployed public key cryptosystem, believed to offer a fine balance of security and practical usability.

The key to the security of public key cryptosystems is that it should be difficult to invert the encryption stage if only the public key is available. For example, it turns out that inverting the encryption stage of RSA is a problem closely related to factoring. Much of the presumed security of RSA comes from the belief that factoring is a problem hard to solve on a classical computer. However, Shor's fast algorithm for factoring on a quantum computer can be used to break RSA, and the same applies to cryptosystems which can be broken if a fast algorithm for solving the discrete logarithm problem – like Shor's quantum algorithm for discrete logarithm – were known. This practical application of quantum computers to the breaking of cryptographic codes has excited much of the interest in quantum computation and quantum information.

In addition to key distribution, quantum techniques may also assist in the achievement of subtler cryptographic goals, important in the post-cold war world, such as protecting private information while it is being used to reach public decisions. Such techniques, pioneered by Claude Crepeau [3] [4], allow two people to compute an agreed-upon function *f*(*x*, *y*) on private inputs *x* and *y* when one person knows *x*, the other knows *y*, and neither is willing to disclose anything about their private input to the other, except for what follows logically from one's private input and the function's output. The classic example of such discreet decision making is the "dating problem", in which two people seek a way of making a date if and only if each likes the other, without disclosing any further information. For example, if Alice likes Bob but Bob doesn't like Alice, the date should be called off without Bob finding out that Alice likes him; on the other hand, it is logically unavoidable for Alice to learn that Bob doesn't like her, because if he did the date would be on.


In the past few years, a remarkable surge of interest in the international scientific and industrial community has propelled quantum cryptography into mainstream computer science and physics. Furthermore, quantum cryptography is becoming increasingly practical at a fast pace. The first quantum key distribution prototype [2] worked over a distance of 32 centimeters in 1989. Two additional experimental demonstrations have been set up since, which work over significant lengths of optical fibre [8] [9]. The highest bit rate system currently demonstrated exchanges secure keys at 1 Mbit/s (over 20 km of optical fibre) and 10 kbit/s (over 100 km of fibre), achieved by a collaboration between the University of Cambridge and Toshiba using the BB84 protocol with decoy pulses.

There is, however, a defense against this quantum attack – quantum key distribution (QKD). Based on the fundamental principles of quantum physics, QKD provides an unconditionally secure way to distribute random keys through insecure channels. The secure key generated by QKD can then be applied in the OTP scheme or other encryption algorithms to enhance information security. In this chapter, we will introduce the fundamental principles behind various QKD or QSS and

The counterintuitive predictions of quantum mechanics about correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen [10]. They described a thought experiment that attempted to show that quantum mechanics was incomplete.

But flowing the EPR paper, Erwin Schrodinger wrote letter (in German) to Einstein in which he used the word Verschrankung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment" [11]. He shortly thereafter published a seminal paper defining and discussing the notion, and

Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontane‐ ous parametric down-conversion to generate a pair of photons entangled in polarization [12]. Other methods include the use of a fiber coupler to confine and mix photons, the use of quantum dots to trap electrons until decay occurs, the use of the Hong-Ou-Mandel effect, etc. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades. It is also possible to create entanglement between quantum systems that never

Consider two noninteracting systems *A* and *B*, with respective Hilbert spaces *HA* and *HB*. The Hilbert space of the composite system is the tensor product *HA* ⊗ *HB*. If the first system is in state |*ψ <sup>A</sup>* and the second in state |*ψ <sup>B</sup>*, the state of the composite system is |*ψ <sup>A</sup>* ⊗ |*ψ <sup>B</sup>*. States of the composite system which can be represented in this form are called separable states, or product states. Not all states are separable states. Fix a basis { |*i <sup>A</sup>*} for *HA* and a basis { | *j <sup>B</sup>*}

> , *AB ij A B i j*

*<sup>B</sup>* yielding |*<sup>ψ</sup> <sup>A</sup>* <sup>=</sup>∑ *<sup>i</sup>*

given two basis vectors { |0 *<sup>A</sup>*, |1 *<sup>A</sup>*} of *HA*and two basis vectors { |0 *<sup>B</sup>*, |1 *<sup>B</sup>*} of *HB*, the

= Ä å*Ci j* (1)

*<sup>A</sup>* <sup>|</sup>*<sup>i</sup> <sup>A</sup>* and |*<sup>ϕ</sup> <sup>B</sup>* <sup>=</sup>∑ *<sup>j</sup>*

*cj*

Introduction to Quantum Cryptography http://dx.doi.org/10.5772/56092 115


As of March 2007, the longest distance over which quantum key distribution has been demonstrated using optic fibre is 148.7 km, achieved by Los Alamos National Laboratory/NIST using the BB84 protocol. Significantly, this distance is long enough for almost all the spans found in today's fibre networks. The distance record for free-space QKD is 144 km between two of the Canary Islands, achieved by a European collaboration using entangled photons (the Ekert scheme) in 2006, and using BB84 enhanced with decoy states in 2007. The experiments suggest transmission to satellites is possible, due to the lower atmospheric density at higher altitudes. For example, although the minimum distance from the International Space Station to the ESA Space Debris Telescope is about 400 km, the atmospheric thickness is about an order of magnitude less than in the European experiment, thus yielding less attenuation.

#### **2. Quantum cryptography fundamentals**

In a wider context, quantum cryptography is a branch of quantum information processing, which includes quantum computing, quantum measurements, and quantum teleportation. Quantum computation and quantum information is the study of the information processing tasks that can be accomplished using quantum mechanical systems.

Quantum mechanics is a mathematical framework or set of rules for the construction of physical theories. The rules of quantum mechanics are simple, but even experts find them counterintuitive, and the earliest antecedents of quantum computation and quantum information may be found in the long-standing desire of physicists to better understand quantum mechanics. Perhaps the most striking of these is the study of quantum entanglement. Entanglement is a uniquely quantum mechanical resource that plays a key role in many of the most interesting applications of quantum computation and quantum information; entanglement is iron to the classical world's bronze age. In recent years there has been a tremendous effort to better understand the properties of entanglement considered as a fundamental resource of Nature, of comparable importance to energy, information, entropy, or any other fundamental resource. Although there is as yet no complete theory of entanglement, some progress has been made in understanding this strange property of quantum mechanics. It is hoped by many researchers that further study of the properties of entanglement will yield insights that facilitate the development of new applications in quantum computation and quantum information.

Interestingly, one decade before people realized that a quantum computer could be used to break public-key cryptography, a solution against this quantum attack had already been found – quantum key distribution (QKD). Based on fundamental principles of quantum physics, QKD provides an unconditionally secure way to distribute random keys through insecure channels. The secure key generated by QKD can be further applied in the OTP scheme or other encryption algorithms to enhance information security. In this chapter, we will introduce the fundamental principles behind various QKD and QSS protocols and present the state-of-the-art quantum cryptography technologies.

#### **2.1. Entanglement state**


114 Theory and Practice of Cryptography and Network Security Protocols and Technologies


The counterintuitive predictions of quantum mechanics about correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen [10]. They described a thought experiment that attempted to show that quantum mechanical theory was incomplete.

Following the EPR paper, Erwin Schrödinger wrote a letter (in German) to Einstein in which he used the word *Verschränkung* (translated by himself as *entanglement*) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment" [11]. He shortly thereafter published a seminal paper defining and discussing the notion, terming it "entanglement".

Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarization [12]. Other methods include the use of a fiber coupler to confine and mix photons, the use of quantum dots to trap electrons until decay occurs, the use of the Hong-Ou-Mandel effect, etc. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades. It is also possible to create entanglement between quantum systems that never directly interacted, through the use of entanglement swapping.

Consider two noninteracting systems $A$ and $B$, with respective Hilbert spaces $H_A$ and $H_B$. The Hilbert space of the composite system is the tensor product $H_A \otimes H_B$. If the first system is in state $|\psi\rangle_A$ and the second in state $|\phi\rangle_B$, the state of the composite system is $|\psi\rangle_A \otimes |\phi\rangle_B$. States of the composite system which can be represented in this form are called separable states, or product states. Not all states are separable. Fix a basis $\{|i\rangle_A\}$ for $H_A$ and a basis $\{|j\rangle_B\}$ for $H_B$. The most general state in $H_A \otimes H_B$ is of the form

$$|\psi\rangle_{AB} = \sum_{i,j} c_{ij}\,|i\rangle_A \otimes |j\rangle_B \tag{1}$$

This state is separable if $c_{ij} = c_i^A c_j^B$, yielding $|\psi\rangle_A = \sum_i c_i^A |i\rangle_A$ and $|\phi\rangle_B = \sum_j c_j^B |j\rangle_B$. It is inseparable if $c_{ij} \neq c_i^A c_j^B$; if a state is inseparable, it is called an entangled state. For example, given two basis vectors $\{|0\rangle_A, |1\rangle_A\}$ of $H_A$ and two basis vectors $\{|0\rangle_B, |1\rangle_B\}$ of $H_B$, the following is an entangled state:

$$\frac{1}{\sqrt{2}}(|0\rangle\_A|0\rangle\_B + |1\rangle\_A|1\rangle\_B) \tag{2}$$


If the composite system is in this state, it is impossible to attribute to either system *A* or system *B* a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero, the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". This has specific empirical ramifications for interferometry [13]. It is worthwhile to note that the above example is one of four Bell states, which are maximally entangled pure states.
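The separability condition on $c_{ij}$ can be tested numerically: a pure two-qubit state is separable exactly when its coefficient matrix has a single nonzero singular value, and the von Neumann entropy of either subsystem is then zero. The following is a small sketch, assuming NumPy; the helper names are ours, not from the chapter:

```python
import numpy as np

# Coefficient matrix c_ij of a two-qubit pure state |psi> = sum_ij c_ij |i>_A |j>_B.
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2), Eq. (2)
product = np.outer([1.0, 0.0], [0.0, 1.0])              # |0>_A |1>_B, a product state

def schmidt_coeffs(c):
    # Schmidt coefficients are the singular values of the coefficient matrix.
    return np.linalg.svd(c, compute_uv=False)

def entanglement_entropy(c):
    # von Neumann entropy of either subsystem, in bits.
    p = schmidt_coeffs(c) ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

print(entanglement_entropy(product))  # ≈ 0 : separable (rank-1 matrix)
print(entanglement_entropy(bell))     # ≈ 1 : maximally entangled
```

The Bell state's two equal Schmidt coefficients are exactly what makes each subsystem look maximally mixed on its own.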

#### **2.2. One-time-pad and key distribution problem**

In conventional cryptography, an unbreakable code does exist. It is called the one-time-pad and was invented by Gilbert Vernam in 1918 [14]. In the one-time-pad method, a message (traditionally called the plain text) is first converted by Alice into a binary form (a string consisting of "0"s and "1"s) by a publicly known method. A key is a binary string of the same length as the message. By combining each bit of the message with the respective bit of the key using XOR (i.e., addition modulo two), Alice converts the plain text into an encrypted form (called the cipher text), i.e., for each bit

$$c_i \equiv m_i + k_i \pmod{2}. \tag{3}$$

Alice then transmits the cipher text to Bob via a broadcast channel. Anyone including an eavesdropper can get a copy of the cipher text. However, without the knowledge of the key, the cipher text is totally random and gives no information whatsoever about the plain text. For decryption, Bob, who shares the same key with Alice, can perform another XOR (i.e. addition modulo two) between each bit of the cipher text with the respective bit of the key to recover the plain text. This is because

$$c_i + k_i \equiv m_i + 2k_i \equiv m_i \pmod{2}. \tag{4}$$

The one-time-pad method is unbreakable, but it has a serious drawback: it supposes that Alice and Bob initially share a random secret string that is as long as the message. Therefore, the one-time-pad simply shifts the problem of secure communication to the problem of key distribution. This is the key distribution problem. One possible solution to the key distribution problem is public-key cryptography.
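Equations (3) and (4) say that encryption and decryption are the same bitwise operation, XOR with the key. A minimal sketch in Python (function and variable names are ours, not from the chapter):

```python
import secrets

def otp_keygen(n_bits):
    # A fresh, uniformly random key exactly as long as the message:
    # the condition Vernam's scheme needs for perfect secrecy.
    return [secrets.randbits(1) for _ in range(n_bits)]

def otp_xor(bits, key):
    # c_i = m_i + k_i (mod 2); applying the same map twice restores the input.
    assert len(bits) == len(key)
    return [(b + k) % 2 for b, k in zip(bits, key)]

message = [0, 1, 1, 0, 1, 0, 1, 1]   # plain text, already in binary form
key = otp_keygen(len(message))
cipher = otp_xor(message, key)       # Alice encrypts, Eq. (3)
recovered = otp_xor(cipher, key)     # Bob decrypts, Eq. (4)
assert recovered == message
```

Reusing the key for a second message breaks the scheme, which is exactly why the key distribution problem matters.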

Quantum mechanics can provide a solution to the key distribution problem. In quantum key distribution, an encryption key is generated randomly between Alice and Bob by using nonorthogonal quantum states. In quantum mechanics there is a quantum no-cloning theorem, which states that it is fundamentally impossible for anyone including an eavesdropper to make an additional copy of an unknown quantum state. Therefore, any attempt by an eavesdropper to learn information about a key in a QKD process will lead to disturbance, which can be detected by Alice and Bob who can, for example, check the bit error rate of a random sample of the raw transmission data.

#### **2.3. Quantum no-cloning theorem**


The quantum no-cloning theorem was stated by Wootters, Zurek, and Dieks in 1982, and has profound implications in quantum computing and related fields.

**Theorem (Quantum no-cloning theorem)** An arbitrary quantum state cannot be duplicated perfectly.

**Proof:** Suppose the state of a quantum system $A$, which we wish to copy, is $|\psi\rangle_A$. In order to make a copy, we take a system $B$ with the same state space and initial state $|e\rangle_B$. The initial, or blank, state must be independent of $|\psi\rangle_A$, of which we have no prior knowledge. The composite system is then described by the tensor product, and its state is $|\psi\rangle_A |e\rangle_B$.

There are only two ways to manipulate the composite system. We could perform an observation, which irreversibly collapses the system into some eigenstate of the observable, corrupting the information contained in the qubit. This is obviously not what we want. Alternatively, we could control the Hamiltonian of the system, and thus the time evolution operator $U$ (for a time-independent Hamiltonian, $U(t) = e^{-iHt/\hbar}$, where $-H/\hbar$ is called the generator of translations in time) up to some fixed time interval, which yields a unitary operator. Then $U$ acts as a copier provided that

$$U|\phi\rangle_A |e\rangle_B = |\phi\rangle_A |\phi\rangle_B \tag{5}$$

for all possible states $|\phi\rangle$ in the state space (including $|\psi\rangle$). Since $U$ is unitary, it preserves the inner product:

$$\langle e|_B \langle\phi|_A\, |\psi\rangle_A |e\rangle_B = \langle e|_B \langle\phi|_A\, U^{\dagger}U\, |\psi\rangle_A |e\rangle_B = \langle\phi|_B \langle\phi|_A\, |\psi\rangle_A |\psi\rangle_B \tag{6}$$

and since quantum mechanical states are assumed to be normalized, it follows that $\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^2$.

This implies that either $\phi = \psi$ (in which case $\langle\phi|\psi\rangle = 1$) or $\phi$ is orthogonal to $\psi$ (in which case $\langle\phi|\psi\rangle = 0$). However, this is not the case for two arbitrary states. While orthogonal states in a specifically chosen basis $\{|0\rangle, |1\rangle\}$, for example $|\phi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and $|\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$, fit the requirement that $\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^2$, this result does not hold for more general quantum states. Apparently $U$ cannot clone a general quantum state.

The quantum no-cloning theorem is a direct result of the linearity of quantum physics. It is closely related to another important theorem in quantum mechanics, which states: if a measurement allows one to gain information about the state of a quantum system, then in general the state of this quantum system will be disturbed, unless we know in advance that the possible states of the original quantum system are orthogonal to each other.
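There is a concrete way to see why no fixed unitary can be the copier $U$ of Eq. (5): a CNOT gate copies the basis states $|0\rangle$ and $|1\rangle$ onto a blank target, yet fails on their superpositions. The following is a short numerical sketch, assuming NumPy; it is illustrative, not part of the chapter:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit is the control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def try_clone(psi):
    # Feed in |psi>|0>; a perfect cloner would output |psi>|psi>.
    blank = np.array([1.0, 0.0])
    out = CNOT @ np.kron(psi, blank)
    return np.allclose(out, np.kron(psi, psi))

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)

print(try_clone(zero))  # True : basis states are copied
print(try_clone(one))   # True
print(try_clone(plus))  # False: CNOT maps |+>|0> to a Bell state, not |+>|+>
```

The superposition case produces exactly the entangled state of Eq. (2) instead of two independent copies, which is the no-cloning theorem in action.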


At first sight, the impossibility of making perfect copies of unknown quantum states seems to be a shortcoming. Surprisingly, it can also be an advantage. It turned out that by using this impossibility smartly, unconditionally secure key distribution can be achieved: any attempt by the eavesdropper to learn the information encoded quantum mechanically will disturb the quantum state and expose her existence. Specifically, the quantum no-cloning theorem has the following consequences:


#### **2.4. Heisenberg uncertainty principle**

**Heisenberg's Uncertainty Principle** (abbreviated HUP) is one of the fundamental concepts of quantum physics, and is the basis for the initial realization of fundamental uncertainties in the ability of an experimenter to measure more than one quantum variable at a time. Attempting to measure an elementary particle's position to the highest degree of accuracy, for example, leads to an increasing uncertainty in being able to measure the particle's momentum to an equally high degree of accuracy.

Suppose *A* and *B* are two Hermitian operators, and |*ψ* is a quantum state. Suppose *ψ*| *AB* |*ψ* = *x* + *iy*, where *x* and *y* are real. Note that *ψ*| *A*, *B* |*ψ* =2*iy* and *ψ*|{*A*, *B*}|*ψ* =2*x*. This implies that

2 22 y y y y yyé ù *A B*, + = { ,} 4 . *A B AB* ë û (7)

By the Cauchy-Schwarz inequality | *ψ*| *AB* |*ψ* | <sup>2</sup> ≤ *ψ*| *A*<sup>2</sup> |*ψ ψ*| *B* <sup>2</sup> |*ψ* , which combined with the equation (1) and dropping a non-negative term gives

$$\left| \left\langle \psi \left| \left[ \underline{A} , \mathcal{B} \right] \right| \psi \right\rangle \right|^2 \le 4 \left\langle \psi \left| A^2 \right| \psi \right\rangle \left\langle \psi \left| \mathcal{B}^2 \right| \psi \right\rangle. \tag{8}$$

Suppose *C* and *D* are two observables. Substituting *A*=*C* − <*C* > and *B* =*D* − <*D* > into the last equation, where the average value of the observable *C* is often written <*C* > = *ψ*|*C* |*ψ* and similar to *D*, we obtain Heisenberg's uncertainty principle as it is usually stated

$$
\Delta(\mathcal{C})\Delta(D) \ge \frac{\left| \left< \nu \left| \left[ \mathcal{C}, D \right.\right] \middle| \nu \right> \right|}{2}. \tag{9}
$$

Quantum communication the sending of encoded messages that are un-hackable by any computer. This i allows s possible because the messages are carried by tiny particles of light called photons. If an eavesdropper attempts to read out the message in transit, they will be discovered by the disturbance their measurement causes to the particles as an inevitable consequence of the HUP. In the regime of quantum experiments, by contrast, we are uncertain about the results of experiments because the particle itself is uncertain. It has no position or speed until we measure it. We can design some protocol of quantum cryptography by using the property of quantum from HUP.

#### **3. Quantum key distribution**

allows one to gain information about the state of a quantum system, then in general the state of this quantum system will be disturbed, unless we know in advance that the possible states

At first sight, the impossibility of making perfect copies of unknown quantum states seems to be a shortcoming. Surprisingly, it can also be an advantage. It turned out that by using this impossibility smartly, unconditionally secure key distribution could be achieved: any attempts by the eavesdropper to learn the information encoded quantum mechanically will disturb the quantum state and expose her existence. Specially, we can get the following characteristics

of the original quantum system are orthogonal to each other. Some remarks about the quantum no-cloning theorem:

**•** The no-cloning theorem prevents us from using classical error-correction techniques on quantum states. For example, we cannot create backup copies of a state in the middle of a quantum computation and use them to correct subsequent errors. Error correction is vital for practical quantum computing, and for some time this was thought to be a fatal limitation. In 1995, Shor and Steane revived the prospects of quantum computing by independently devising the first quantum error-correcting codes, which circumvent the no-cloning theorem.

**•** Similarly, cloning would violate the no-teleportation theorem, which says that classical teleportation (not to be confused with entanglement-assisted teleportation) is impossible. In other words, quantum states cannot be measured reliably.

**•** The no-cloning theorem does not prevent superluminal communication via quantum entanglement, as cloning is a sufficient condition for such communication but not a necessary one. Nevertheless, consider the EPR thought experiment, and suppose quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: if Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either $|z+\rangle_B$ or $|z-\rangle_B$. To transmit a "1", Alice does nothing to her qubit. Bob creates many copies of his electron's state and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes +1/2 and −1/2 with equal probability. This would allow Alice and Bob to communicate across space-like separations.

**•** The no-cloning theorem prevents us from viewing the holographic principle for black holes as meaning that we have two copies of the same information lying at the event horizon and in the black hole interior simultaneously. This leads us to more radical interpretations like black hole complementarity.

118 Theory and Practice of Cryptography and Network Security Protocols and Technologies

**2.4. Heisenberg uncertainty principle**

**Heisenberg's Uncertainty Principle** (abbreviated HUP) is one of the fundamental concepts of quantum physics, and is the basis for the initial realization of fundamental uncertainties in the ability of an experimenter to measure more than one quantum variable at a time. Attempting to measure an elementary particle's position to the highest degree of accuracy, for example, leads to an increasing uncertainty in the knowledge of its momentum.

The first attempt to use quantum mechanics to achieve missions impossible in classical information started in the early 1970s. Stephen Wiesner proposed two communication modalities not allowed by classical physics: the "quantum multiplexing" channel and counterfeit-free bank notes. Unfortunately, his paper was rejected and could not be published until a decade later. In the 1980s, Charles H. Bennett and Gilles Brassard extended Wiesner's idea and applied it to solve the key distribution problem in classical cryptography. In 1984, the well-known BB84 QKD protocol was published [15]. QKD is a new tool in the cryptographer's toolbox: it allows for secure key agreement over an untrusted channel where the output key is entirely independent from any input value, a task that is impossible using classical cryptography. QKD does not eliminate the need for other cryptographic primitives, such as authentication, but it can be used to build systems with new security properties.

To overcome the errors introduced by noise and wiretapping in the quantum channel, unconditionally secure secret-key agreement over a public channel was designed: information reconciliation and privacy amplification can be applied to quantum key distribution, or otherwise quantum entanglement purification should be used. The first general, although rather complex, proof of unconditional security was given by Mayers [16], which was followed by a number of other proofs. In Mayers' proof, the BB84 scheme proposed by Bennett and Brassard was proved to be unconditionally secure. Building on the quantum privacy amplification idea, Lo and Chau proposed a conceptually simpler proof of security [17].

In QKD, two parties, Alice and Bob, obtain some quantum states and measure them. They communicate (all communication from this point onwards is classical) to determine which of their measurement results could lead to secret key bits; some are discarded in a process called sifting because the measurement settings were incompatible. They perform error correction and then estimate a security parameter which describes how much information an eavesdropper might have about their key data. If this amount is above a certain threshold, then they abort, as they cannot guarantee any secrecy whatsoever. If it is below the threshold, then they can apply privacy amplification to squeeze out any remaining information the eavesdropper might have, and arrive at a shared secret key. Some of this classical communication must be authenticated to avoid man-in-the-middle attacks. Some portions of the protocol can fail with negligible probability.

A flow chart describing the stages of quantum key distribution is given in Figure 1.

**Figure 1.** Flow chart of the stages of a quantum key distribution protocol. Stages with double lines require classical authentication. [18]

[Figure 1 stages: quantum state transmission and measurement; key sifting/reconciliation; error correction; security parameter estimation (secret key distillable? yes/no); privacy amplification; key confirmation; then either a secret key is output or the protocol aborts. An authentication key is consumed by the classically authenticated stages.]

Introduction to Quantum Cryptography http://dx.doi.org/10.5772/56092 121

#### **3.1. The BB84 QKD protocol**

The best-known protocol for QKD is the Bennett and Brassard protocol (BB84). The procedure of BB84 is as follows (also shown in Table 1).

**1.** Quantum communication phase

**1.** In BB84, Alice sends Bob a sequence of photons through an *insecure quantum channel*, each independently chosen from one of the four polarizations: vertical, horizontal, 45-degrees and 135-degrees.

**2.** For each photon, Bob randomly chooses one of the two measurement bases (rectilinear and diagonal) to perform a measurement.

**3.** Bob records his measurement bases and results. Bob publicly acknowledges his receipt of signals.

**2.** Public discussion phase

**1.** Alice broadcasts her bases of measurements. Bob broadcasts his bases of measurements.

**2.** Alice and Bob discard all events where they use different bases for a signal.

**3.** To test for tampering, Alice randomly chooses a fraction, *p*, of all remaining events as test events. For those test events, she publicly broadcasts their positions and polarizations.

**4.** Bob broadcasts the polarizations of the test events.

**5.** Alice and Bob compute the error rate of the test events (i.e., the fraction of data for which their values disagree). If the computed error rate is larger than some prescribed threshold value, say 11%, they abort. Otherwise, they proceed to the next step.

**6.** Alice and Bob each convert the polarization data of all remaining data into a binary string called a raw key (by, for example, mapping a vertical or 45-degrees photon to "0" and a horizontal or 135-degrees photon to "1"). They can perform classical postprocessing such as error correction and privacy amplification to generate a final key.

| Alice's bit sequence | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Alice's basis | + | × | + | + | × | + | × | × | + | × |
| Alice's photon polarization | → | ↖ | ↑ | ↑ | ↗ | ↑ | ↗ | ↗ | → | ↖ |
| Bob's basis | + | + | × | + | + | × | × | + | + | × |
| Bob's measured polarization | → | ↑ | ↖ | ↑ | → | ↗ | ↗ | ↑ | → | ↖ |
| Bob's sifted measured polarization | → | | | ↑ | | | ↗ | | → | ↖ |
| Bob's data sequence | 0 | | | 1 | | | 0 | | 0 | 1 |

**Table 1.** Procedure of BB84 protocol.
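The quantum-communication and sifting steps of BB84 can be mimicked with a short classical simulation. This is a toy sketch, not a quantum implementation: it only models the fact that a measurement in the wrong basis yields a random outcome, and assumes a noiseless, eavesdropper-free channel (all names here are illustrative):

```python
import random

def bb84_sift(n=32, seed=1):
    """Toy classical simulation of the BB84 quantum-communication and
    sifting steps (noiseless channel, no eavesdropper)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]   # + rectilinear, x diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_bits.append(bit)                 # same basis: deterministic outcome
        else:
            bob_bits.append(rng.randint(0, 1))   # wrong basis: random outcome

    # Sifting: keep only the positions where the bases agree.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_bits[i]   for i in keep]
    return alice_key, bob_key

a, b = bb84_sift()
# Without noise or Eve, the sifted keys agree; on average half the
# positions survive sifting.
```

An eavesdropper or channel noise would show up as disagreements between the two sifted keys, which is exactly what the test events in the public discussion phase estimate.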



The basic idea of the BB84 QKD protocol is beautiful, and its security can be intuitively understood from the quantum no-cloning theorem. On the other hand, to apply QKD in practice, Alice and Bob need to find the upper bound of Eve's information quantitatively, given the observed quantum bit error rate (abbreviated QBER) and other system parameters. This is the primary goal of various QKD security proofs, and it has turned out to be extremely difficult. One major challenge comes from the fact that Eve could launch attacks far beyond today's technologies and our imagination. Nevertheless, QKD was proved to be unconditionally secure. This is one of the most significant achievements in quantum information.

#### **3.2. QKD based on EPR**

An essentially equivalent protocol that utilizes Einstein-Podolsky-Rosen (EPR) correlations has been worked on by Artur Ekert [19] and Bennett, Brassard, and Mermin [20]. To take advantage of EPR correlations, particles are prepared in such a way that they are "entangled". This means that although they may be separated by large distances in space, they are not independent of each other. Suppose the entangled particles are photons. If one of the particles is measured according to the rectilinear basis and found to have a vertical polarization, then the other particle will also be found to have a vertical polarization if it is measured according to the rectilinear basis. If, however, the second particle is measured according to the circular basis, it may be found to have either left-circular or right-circular polarization.

In his 1991 paper, Ekert [19] suggested basing the security of this two-qubit protocol on Bell's inequality, an inequality which demonstrates that some correlations predicted by quantum mechanics cannot be reproduced by any local theory. To do this, Alice and Bob can use a third basis. In this way the probability that they happen to choose the same basis is reduced from 1/2 to 2/9, but at the same time as they establish a key, they collect enough data to test Bell's inequality. They can thus check that the source really emits the entangled state and not merely product states. The following year Bennett, Brassard, and Mermin [20] criticized Ekert's letter, arguing that the violation of Bell's inequality is not necessary for the security of quantum cryptography and emphasizing the close connection between the Ekert and the BB84 schemes. However, this criticism might be missing an important point: although the exact relation between security and Bell's inequality is not yet fully known, there are clear results establishing fascinating connections.

The steps of the protocol for developing a secret key using EPR correlations of entangled photons are explained below.

**1.** Alice creates EPR pairs of polarized photons, keeping one particle for herself and sending the other particle of each pair to Bob.

**2.** Alice randomly measures the polarization of each particle she kept according to the rectilinear or circular basis. She records each measurement type and the polarization measured.

**3.** Bob randomly measures each particle he received according to the rectilinear or circular basis. He records each measurement type and the polarization measured.

**4.** Alice and Bob tell each other which measurement types were used, and they keep the data from all particle pairs where they both chose the same measurement type.

**5.** They convert the remaining data to a string of bits using a convention such as: left-circular = 0, right-circular = 1, horizontal = 0, vertical = 1.

One important difference between the BB84 and the EPR methods is that with BB84, the key created by Alice and Bob must be stored classically until it is used. Therefore, although the key was completely secure when it was created, its continued security over time is only as great as the security of its storage. Using the EPR method, Alice and Bob could potentially store the prepared entangled particles and then measure them and create the key just before they are going to use it, eliminating the problem of insecure storage.

So the idea consists in replacing the quantum channel carrying two qubits from Alice to Bob by a channel carrying two qubits from a common source, one qubit to Alice and one to Bob. A first possibility would be that the source always emits the two qubits in the same state, chosen randomly among the four states of the BB84 protocol. Alice and Bob would then both measure their qubit in one of the two bases, again chosen independently and randomly. The source then announces the bases, and Alice and Bob keep the data only when they happen to have made their measurements in the compatible basis. If the source is reliable, this protocol is equivalent to that of BB84: it is as if the qubit propagates backwards in time from Alice to the source, and then forward to Bob. But rather than trusting the source, which could be in Eve's hands, the Ekert protocol assumes that the two qubits are emitted in a maximally entangled state like $|\phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.

Then, when Alice and Bob happen to use the same basis, either the *x* basis or the *y* basis, i.e., in about half of the cases, their results are identical, providing them with a common key.

#### **3.3. Continuous variable QKD**

In the BB84 QKD protocol, Alice's random bits are encoded in a two-dimensional space such as the polarization state of a single photon. More recently, QKD protocols working with continuous variables have been proposed. Among them, the Gaussian modulated coherent state (GMCS) QKD protocol has drawn special attention [21].

The protocol runs as follows. First, Alice draws two random numbers $x_A$ and $p_A$ from a Gaussian distribution of mean zero and variance $V_A N_0$, where $N_0$ denotes the shot-noise variance. Then, she sends the coherent state $|x_A + ip_A\rangle$ to Bob, who randomly chooses to measure either the quadrature $x$ or $p$. Later, using a public authenticated channel, he informs Alice about which quadrature he measured, so she may discard the irrelevant data. After many similar exchanges, Alice and Bob (and possibly the eavesdropper Eve) share a set of correlated Gaussian variables, which we call 'key elements'.

The basic scheme of the GMCS QKD protocol is shown in Figure 2.
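The statistics of these correlated key elements can be sketched numerically. The following is a toy model, not a security analysis: it assumes a lossless channel, unit detection efficiency, and illustrative parameter values ($V_A = 10$, $N_0 = 1$); Bob's homodyne outcome is modeled simply as Alice's modulation plus Gaussian shot noise of variance $N_0$:

```python
import random

def gmcs_round(rng, V_A=10.0, N0=1.0):
    """One idealized GMCS round: Alice draws x_A, p_A ~ N(0, V_A*N0);
    Bob homodynes one randomly chosen quadrature, whose outcome carries
    shot noise of variance N0 on top of Alice's modulation."""
    xA = rng.gauss(0.0, (V_A * N0) ** 0.5)
    pA = rng.gauss(0.0, (V_A * N0) ** 0.5)
    quad = rng.choice("xp")
    alice_keeps = xA if quad == "x" else pA   # Alice discards the other quadrature
    bob_measures = alice_keeps + rng.gauss(0.0, N0 ** 0.5)
    return alice_keeps, bob_measures

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(7)
data = [gmcs_round(rng) for _ in range(20000)]
r = pearson([a for a, _ in data], [b for _, b in data])
# With V_A = 10 and N0 = 1, the correlation is close to sqrt(10/11), about 0.95.
```

Channel loss would enter this picture as extra vacuum noise on Bob's outcome, degrading the correlation, which is exactly the limitation discussed below.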

**Figure 2.** *The Gaussian modulated coherent state (GMCS) QKD. X: amplitude quadrature; P: phase quadrature.* [22]

Alice modulates both the amplitude quadrature and phase quadrature of a coherent state with Gaussian distributed random numbers. In classical electromagnetism, these two quadratures correspond to the in-phase and out-of-phase components of the electric field, which can be conveniently modulated with optical phase and amplitude modulators. Alice sends the modulated coherent state together with a strong local oscillator (a strong laser pulse which serves as a phase reference) to Bob. Bob randomly measures one of the two quadratures with a phase modulator and a homodyne detector. After performing his measurements, Bob informs Alice which quadrature he actually measured for each pulse and Alice drops the irrelevant data. At this stage, they share a set of correlated Gaussian variables which are called the 'raw key'. Given that the variances of the measurement results are below certain thresholds, they can further work out a perfectly correlated secure key by performing reconciliation and privacy amplification. Classical data processing is then necessary for Alice and Bob to obtain a fully secret binary key.

The security of the GMCS QKD can be comprehended from the uncertainty principle. In quantum optics, the amplitude quadrature and phase quadrature of a coherent state form a pair of conjugate variables, which cannot be simultaneously determined with arbitrarily high accuracy due to the Heisenberg uncertainty principle. From the observed variance in one quadrature, Alice and Bob can upper bound Eve's information about the other quadrature. This provides a way to verify the security of the generated key. Recently, an unconditional security proof of the GMCS QKD appeared [23].

Different from the BB84 QKD, in GMCS QKD homodyne detectors are employed to measure electric fields rather than photon energy. By using a strong local oscillator, high-efficiency and fast photodiodes can be used to construct the homodyne detector, which can result in a high secure key generation rate. However, the performance of the GMCS QKD is strongly dependent on the channel loss. Recall that in a BB84 QKD system, the channel loss plays a simple role: it reduces the communication efficiency but does not introduce QBER. A photon is either lost in the channel, in which case Bob will not register anything, or it reaches Bob's detector intact. On the other hand, in the GMCS QKD, the channel loss introduces vacuum noise and reduces the correlation between Alice's and Bob's data. As the channel loss increases, the vacuum noise becomes so high that it is impossible for Alice and Bob to resolve a small excess noise (which is used to upper bound Eve's information) on top of a huge vacuum noise.

Compared with the BB84 QKD, the GMCS QKD can yield a high secure key rate over short distances [24] [25].

#### **3.4. Decoy state QKD**

The security of QKD has been rigorously proven in a number of recent papers, and there has been tremendous interest in experimental QKD [26] [27]. Unfortunately, all those exciting recent experiments are, in principle, insecure due to real-life imperfections. More concretely, highly attenuated lasers are often used as sources, but these sources sometimes produce signals that contain more than one photon. Those multi-photon signals open the door to powerful new eavesdropping attacks, including the photon-number splitting attack. For example, Eve can, in principle, measure the photon number of each signal emitted by Alice and selectively suppress single-photon signals. She splits multi-photon signals, keeping one copy for herself and sending one copy to Bob. Since Eve then has an identical copy of what Bob possesses, the unconditional security of QKD is completely compromised.

In summary, in the standard BB84 protocol, only signals originating from single-photon pulses emitted by Alice are guaranteed to be secure. Consequently, paraphrasing GLLP (Gottesman, Lo, Lutkenhaus, Preskill [28]), the secure key generation rate (per signal state emitted by Alice) can be shown to be given by:

$$S \ge Q_{\mu} \{-H_2(E_{\mu}) + \Omega \left[1 - H_2(e_1)\right] \}, \tag{10}$$

where $Q_{\mu}$ and $E_{\mu}$ are, respectively, the gain and the quantum bit error rate (QBER) of the signal state. (Here, the gain means the ratio of the number of Bob's detection events, where Bob chooses the same basis as Alice, to the number of signals emitted by Alice; the QBER means the error rate of Bob's detection events for the cases in which Alice and Bob use the same basis.) $\Omega$ and $e_1$ are, respectively, the fraction and the QBER of Bob's detection events that originated from single-photon signals emitted by Alice, and $H_2$ is the binary Shannon entropy. It is a priori very hard to obtain a good lower bound on $\Omega$ and a good upper bound on $e_1$. Therefore, prior-art methods (as in GLLP [28]: under (semi-)realistic assumptions, if imperfections are sufficiently small, then BB84 is unconditionally secure) make the most pessimistic assumption that all multi-photon signals emitted by Alice will be received by Bob. For this reason, until now it has been widely believed that the demand for unconditional security severely reduces the performance of QKD systems.
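Equation (10) is straightforward to evaluate once estimates of the four quantities are in hand. A minimal sketch follows; the numerical inputs are made up purely for illustration and do not come from any experiment:

```python
from math import log2

def h2(p):
    """Binary Shannon entropy H2(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def gllp_rate(Q_mu, E_mu, Omega, e1):
    """Lower bound on the secure key rate per emitted signal, Eq. (10):
    S >= Q_mu * ( -H2(E_mu) + Omega * (1 - H2(e1)) )."""
    return Q_mu * (-h2(E_mu) + Omega * (1 - h2(e1)))

# Illustrative (made-up) numbers: 1% gain, 3% overall QBER,
# 80% single-photon fraction with 2% single-photon QBER.
rate = gllp_rate(Q_mu=0.01, E_mu=0.03, Omega=0.8, e1=0.02)
```

The formula makes the trade-off explicit: a pessimistic (small) lower bound on $\Omega$ or a pessimistic (large) upper bound on $e_1$ directly shrinks the rate, which is why good bounds on these two quantities matter so much.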

In [29], H.-K. Lo et al. present a simple method that provides very good bounds on $\Omega$ and $e_1$. The method is based on the decoy state idea first proposed by Hwang [12]. While the idea of Hwang was highly innovative, his security analysis was heuristic. Consequently, the method of [29] for the first time makes most of the long-distance QKD experiments reported in the literature unconditionally secure, and it has the advantage that it can be implemented with essentially the current hardware. So, unlike prior-art solutions based on single-photon sources, their method does not require daunting experimental developments. The key point of the decoy state idea is that Alice prepares a set of additional states, the decoy states, in addition to the standard BB84 states. The decoy states are used only for detecting eavesdropping attacks, whereas the standard BB84 states are used only for key generation. The only difference between the decoy states and the standard BB84 states is their intensities (i.e., their photon number distributions). By measuring the yields and QBER of the decoy states, Alice and Bob can obtain reliable bounds on $\Omega$ and $e_1$, thus allowing them to surpass all prior-art results substantially [30].

At first, we recall the original decoy state QKD protocol of Hwang [12] in detail.

Define *Yn* = yield = the conditional probability that a signal will be detected by Bob, given that it is emitted by Alice as an *n*-photon state.

To design a method to test experimentally the yield (i.e., transmittance) of multi-photons, we can use two-photon states as decoys and test their yield. For example, if Alice sends *N* two-photon signals to Bob and Bob detects *x* of them, Alice and Bob estimate the yield *Y*2 = *x* / *N*. If Eve selectively sends multi-photons, *Y*2 will be abnormally large, so Eve will be caught.

The two kinds of states for the decoy state QKD (toy model) are as follows:

**a.** Signal state: Poisson photon number distribution *μ* (at Alice).

**b.** Decoy state: two-photon signals.

The procedure of the decoy state QKD (toy model) is as follows:

**1.** Alice randomly sends either a signal state or a decoy state to Bob.

**2.** Bob acknowledges receipt of the signals.

**3.** Alice publicly announces which are signal states and which are decoy states.

**4.** Alice and Bob compute the transmission probability for the signal states and for the decoy states respectively.

If Eve selectively transmits two-photons, an abnormally high fraction of the decoy states (state **b.**) will be received by Bob, and Eve will be caught. The practical problem with the toy model is that making a perfect two-photon state is hard. Hwang's solution in his decoy state QKD is to use another mixture of good and bad photons with a different weight.

There are two kinds of states in Hwang's decoy state QKD:

**a.** Signal state: Poisson photon number distribution *α* (at Alice) with mixture 1.

**b.** Decoy state: Poisson photon number distribution *μ* ∼ 2 (at Alice) with mixture 2.

If Eve lets an abnormally high fraction of multi-photons go to Bob, then the decoy states (which have a high weight of multi-photons) will have an abnormally high transmission. Therefore, Alice and Bob can catch Eve.

But there are some drawbacks to Hwang's original idea:

**1.** Hwang's security analysis was heuristic, rather than rigorous.

**2.** "Dark counts" – an important effect – are not considered.

**3.** The final results (distance and key generation rate) are unclear.

Suppose that a decoy state and a signal state have the same characteristics (wavelength, timing information, etc.), as in H.K. Lo et al.'s method [29]. Then Eve cannot distinguish a decoy state from a signal state, and the only piece of information available to Eve is the number of photons in a signal. Therefore, the yield *Yn* (yield of an *n*-photon signal) and the QBER *en* (quantum bit error rate of an *n*-photon signal) can depend only on the photon number *n*, not on which distribution (decoy or signal) the state is drawn from. Since Eve cannot treat a decoy state any differently from a signal state,

*Yn*(*signal*) = *Yn*(*decoy*) = *Yn*,  *en*(*signal*) = *en*(*decoy*) = *en*.
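The toy-model decoy test described above is easy to simulate. In the sketch below (an illustration with assumed numbers: a channel that transmits each photon independently with transmittance η = 0.1, and an Eve who forwards every multi-photon signal), the honest yield of two-photon decoys is *Y*2 = 1 − (1 − η)², while Eve's selective forwarding drives *Y*2 to an abnormally high value:

```python
import random

random.seed(1)
ETA = 0.1        # assumed honest transmittance per photon
N = 100_000      # number of two-photon decoy signals sent

def detected_honest():
    # A two-photon decoy is detected if at least one photon arrives.
    return random.random() < 1 - (1 - ETA) ** 2

y2_honest = sum(detected_honest() for _ in range(N)) / N
y2_eve = 1.0     # Eve forwards every multi-photon signal, so all decoys arrive

print(f"honest Y2 ~ {1 - (1 - ETA) ** 2:.3f}, simulated {y2_honest:.3f}")
print(f"Y2 under Eve's selective forwarding: {y2_eve:.3f}  (abnormally high)")
```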

Now imagine that Alice varies *μ* randomly and independently over all non-negative values for each signal; Alice and Bob can then experimentally measure the yield *Qμ* and the QBER *Eμ* for each *μ*:

$$Q_{\mu} = Y_0 e^{-\mu} + Y_1 e^{-\mu}\mu + Y_2 e^{-\mu}\frac{\mu^2}{2!} + \dots + Y_n e^{-\mu}\frac{\mu^n}{n!} + \dots \tag{11}$$

$$Q_{\mu}E_{\mu} = Y_0 e^{-\mu}e_0 + Y_1 e^{-\mu}\mu e_1 + Y_2 e^{-\mu}\frac{\mu^2}{2!}e_2 + \dots + Y_n e^{-\mu}\frac{\mu^n}{n!}e_n + \dots \tag{12}$$

Since the relations between the variables *Qμ* and *Yn* and between *Eμ* and *en* are linear, given the set of *Qμ*'s and *Eμ*'s measured in their experiments, Alice and Bob can deduce mathematically, with high confidence, the variables *Yn* and *en*. This means that Alice and Bob can constrain the yields *Yn* and the QBERs *en* simultaneously for all *n*. Suppose Alice and Bob know their channel properties well. Then they know what range of values of the *Yn*'s and *en*'s is acceptable. Any attack by Eve that substantially changes the value of any one of the *Yn*'s or *en*'s will, in principle, be caught with high probability by the decoy state method. Therefore, in order to avoid being detected, the eavesdropper Eve has very limited options in her eavesdropping attack. In summary, the ability of Alice and Bob to verify experimentally the values of the *Yn*'s and *en*'s in the decoy state method greatly strengthens their power in detecting eavesdropping, leading to a dramatic improvement in the performance of their QKD system. The decoy state method allows Alice and Bob to detect deviations from the normal behavior due to eavesdropping attacks.
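Because equations (11) and (12) are linear in the *Yn*'s, measuring *Qμ* at several intensities lets Alice and Bob invert for the low-order yields. The sketch below (an illustration: the intensities, the truncation at n = 6, and the honest-channel model *Yn* = 1 − (1 − η)^n are assumptions, not from the chapter) solves the truncated linear system:

```python
import math
import numpy as np

NMAX = 6                                      # truncation of the photon-number sum
MUS = [0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]    # assumed decoy/signal intensities
ETA = 0.15                                    # assumed channel transmittance

def y_true(n):
    # Honest channel: each of the n photons survives independently.
    return 1 - (1 - ETA) ** n

def poisson_weight(mu, n):
    return math.exp(-mu) * mu ** n / math.factorial(n)

# "Measured" gains Q_mu, generated from Eq. (11) with the model yields.
Q = np.array([sum(y_true(n) * poisson_weight(mu, n) for n in range(NMAX + 1))
              for mu in MUS])

# Linear system Q = A y with A[k][n] = e^{-mu_k} mu_k^n / n!
A = np.array([[poisson_weight(mu, n) for n in range(NMAX + 1)] for mu in MUS])
y_est, *_ = np.linalg.lstsq(A, Q, rcond=None)
print(np.round(y_est[:3], 3))                 # recovered Y0, Y1, Y2
```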


Introduction to Quantum Cryptography http://dx.doi.org/10.5772/56092 129


In [29], the authors also give, for the first time, a rigorous analysis of the security of decoy state QKD. Moreover, they show that the decoy state idea can be combined with the prior-art GLLP analysis. The resulting comparison with and without decoy states is shown in Figure 3.

**Figure 3.** The key generation rate as a function of transmission distance (km), with and without decoy states (GYS experimental parameters).

#### **4. The security of QKD**


Bennett and Brassard once said that the most important question in quantum cryptography is to determine how secure it really is.


Security proofs are very important because a) they provide the foundation of security to a QKD protocol, b) they provide a formula for the key generation rate of a QKD protocol and c) they may even provide a construction for the classical post-processing protocol (for error correction and privacy amplification) that is necessary for the generation of the final key. Without security proofs, a real-life QKD system is incomplete because we can never be sure about how to generate a secure key and how secure the final key really is.

After the qubit exchange and basis reconciliation, Alice and Bob each have a sifted key. Ideally, these keys are identical. But in real life, there are always some errors, and Alice and Bob must apply some classical information processing protocols, like error correction and privacy amplification to their data. The first protocol is necessary to obtain identical keys and the second to obtain a secret key. Essentially, the problem of eavesdropping is to find protocols which, given that Alice and Bob can only measure the QBER, either provide Alice and Bob with a verifiably secure key or stop the protocol and inform the users that the key distribution has failed. This is a delicate problem at the intersection of quantum physics and information theory. Actually, it comprises several eavesdropping problems, depending on the precise protocol, the degree of idealization one admits, the technological power one assumes Eve has, and the assumed fidelity of Alice and Bob's equipment. Let us immediately stress that a complete analysis of eavesdropping on a quantum channel has yet to be achieved.

#### **4.1. Eavesdropping attacks**


In order to simplify the problem, several eavesdropping strategies of limited generality have been defined ([31-33]) and analyzed. Of particular interest is the assumption that Eve attaches independent probes to each qubit and measures her probes one after the other. They can be classified as follows:

**Individual attacks**: In an individual attack, Eve attacks each signal independently. The intercept-resend attack is an example: Eve measures each photon in a randomly chosen basis and then resends the resulting state to Bob. For instance, if Eve performs a rectilinear measurement, photons prepared by Alice in the diagonal basis will be disturbed by Eve's measurement and give random answers; when Eve resends rectilinear photons to Bob and Bob performs a diagonal measurement (matching Alice's basis), he will also get random answers. Since the two bases are chosen randomly by each party, such an intercept-resend attack gives a bit error rate of 0.5 × 0.5 + 0.5 × 0 = 25%, which is readily detectable by Alice and Bob. More sophisticated attacks against QKD do exist; fortunately, the security of QKD has now been proven.
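The 25% error rate of the intercept-resend attack can be reproduced with a short simulation (an illustrative sketch, not from the chapter), encoding bases and bits as 0/1:

```python
import random

random.seed(7)

def bb84_intercept_resend(n=200_000):
    """Simulate BB84 where Eve measures every photon in a random basis."""
    errors = sifted = 0
    for _ in range(n):
        bit, a_basis = random.getrandbits(1), random.getrandbits(1)
        # Eve measures in a random basis; a wrong basis gives a random result.
        e_basis = random.getrandbits(1)
        e_bit = bit if e_basis == a_basis else random.getrandbits(1)
        # Bob measures Eve's resent photon; keep only matching Alice/Bob bases.
        b_basis = random.getrandbits(1)
        if b_basis != a_basis:
            continue                      # discarded during sifting
        b_bit = e_bit if b_basis == e_basis else random.getrandbits(1)
        sifted += 1
        errors += b_bit != bit
    return errors / sifted

print(f"QBER under intercept-resend: {bb84_intercept_resend():.3f}")  # about 0.25
```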

**Collective attacks**: A more general class of attacks is the collective attack, in which, for each signal, Eve independently couples it with an ancillary quantum system, commonly called an ancilla, and evolves the combined signal/ancilla system unitarily. She sends the resulting signals to Bob but keeps all the ancillas herself. Unlike the case of individual attacks, Eve postpones her choice of measurement: only after hearing the public discussion between Alice and Bob does Eve decide on what measurement to perform on her ancillas to extract information about the final key.

**Joint attacks**: The most general class of attacks is the joint attack. In a joint attack, instead of interacting with each signal independently, Eve treats all the signals as a single quantum system. She couples the signal system with her ancilla and evolves the combined signal and ancilla system unitarily. She hears the public discussion between Alice and Bob before deciding on which measurement to perform on her ancilla.


For joint and collective attacks, the usual assumption is that Eve measures her probes only after Alice and Bob have completed all public discussion about basis reconciliation, error correction, and privacy amplification. For the more realistic individual attacks, one assumes that Eve waits only until the basis reconciliation phase of the public discussion. With today's technology, it might even be fair to assume that in individual attacks Eve must measure her probe before basis reconciliation [34]. The motivation for this assumption is that one hardly sees what Eve could gain by waiting until after the public discussion on error correction and privacy amplification before measuring her probes, since she is going to measure them independently anyway. For practical QKD, some assumptions about the security of QKD are summarized in [18]; we describe them in the next subsection.

#### **4.2. Some assumptions about security of QKD**

Quantum key distribution is often described by its proponents as "unconditionally secure" to emphasize its difference with computationally secure classical cryptographic protocols. While there are still conditions that need to be satisfied for quantum key distribution to be secure, the phrase "unconditionally secure" is justified because, not only are the conditions reduced, they are in some sense minimal necessary conditions. Any secure key agreement protocol must make a few minimal assumptions, for security cannot come from nothing: we must be able to identify and authenticate the communicating parties, we must be able to have some private location to perform local operations, and all parties must operate within the laws of physics.

The following statement describes the security of quantum key distribution, and there are many formal mathematical arguments for the security of QKD.

**Theorem 1 (Security statement for quantum key distribution)** If 1) quantum mechanics is correct, and 2) authentication is secure, and 3) our devices are reasonably secure, then with high probability the key established by quantum key distribution is a random secret key independent (up to a negligible difference) of input values.

**Assumption 1: Quantum mechanics is correct.** This assumption requires that any eavesdropper be bounded by the laws of quantum mechanics, although within this realm there are no further restrictions beyond the eavesdropper's inability to access the devices. In particular, we allow the eavesdropper to have arbitrarily large quantum computing technology, far more powerful than the current state of the art. Quantum mechanics has been tested experimentally for nearly a century, to very high precision. But even if quantum mechanics is superseded by a new physical theory, it is not necessarily true that quantum key distribution would be insecure: for example, secure key distribution can be achieved in a manner similar to QKD solely based on the assumption that no faster-than-light communication is possible [35].

**Assumption 2: Authentication is secure.** This assumption is one of the main concerns of those evaluating quantum key distributions. In order to be protected against man-in-the-middle attack, much of the classical communication in QKD must be authenticated. Authentication can be achieved with unconditional security using short shared keys, or with computational security using public key cryptography.
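Unconditionally secure authentication from a short shared key is typically built from universal hashing in the Wegman-Carter style. The toy one-time MAC below (a hypothetical sketch; the field size and message encoding are illustrative choices, not a production scheme) conveys the idea: the key is used once, and any modified message changes the tag except with negligible probability:

```python
import secrets

P = 2**61 - 1   # a Mersenne prime; the field size is an illustrative choice

def mac(message: bytes, key: tuple) -> int:
    """One-time polynomial-evaluation MAC: hash with r, then mask with s."""
    r, s = key
    acc = 0
    for byte in message:
        acc = (acc * r + byte) % P       # Horner evaluation of the message poly
    return (acc + s) % P

# Alice and Bob share a short one-time key (r, s).
key = (secrets.randbelow(P), secrets.randbelow(P))
msg = b"bases: + x + + x x +"
tag = mac(msg, key)

assert mac(msg, key) == tag                           # genuine message accepted
assert mac(b"bases: x x + + x x +", key) != tag       # tampering detected (w.h.p.)
```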

**Assumption 3: Our devices are secure.** Constructing a QKD implementation that is verifiably secure is a substantial engineering challenge that researchers are still working on. Although the first prototype QKD system leaked key information over a side channel (it made different noises depending on the photon polarization, and thus the "prototype was unconditionally secure against any eavesdropper who happened to be deaf" [36] ), experimental cryptanalysis leads to better theoretical and practical security. More sophisticated side-channel attacks continue to be proposed against particular implementations of existing systems (e.g., [37]), but so too are better theoretical methods being proposed, such as the decoy state method [38]. Device-independent security proofs [39, 40] aim to minimize the security assumptions on physical devices. It seems reasonable to expect that further theoretical and engineering advances will eventually bring us devices which have strong arguments and few assumptions for their security.

#### **4.3. Security proofs for QKD**


Proving the security of QKD against the most general attack was a very hard problem. It took more than 10 years, but the unconditional security of QKD was finally established in several papers in the late 1990s and early 2000s. One approach, by Mayers [16], was to prove the security of BB84 directly. A simpler approach, by Lo and Chau [17], made use of the idea of entanglement distillation by Bennett, DiVincenzo, Smolin and Wootters (BDSW) [41] and of quantum privacy amplification by Deutsch et al. [42] to prove the security of an entanglement-based QKD protocol. The two approaches were unified by the work of Shor and Preskill [43], who provided a simple proof of security of BB84 using the entanglement distillation idea. Other early security proofs of QKD include those of Biham, Boyer, Boykin, Mor, and Roychowdhury [44], and Ben-Or [45].

There are several approaches to security proofs, as follows [5].

#### *4.3.1. Entanglement distillation*

The entanglement distillation protocol (EDP) provides a simple approach to security proofs [17, 42, 43]. The basic insight is that entanglement is a sufficient (but not necessary) condition for a secure key. In the noiseless case, suppose two distant parties, Alice and Bob, share a maximally entangled state of the form $|\phi\rangle_{AB} = \frac{1}{\sqrt{2}}(|00\rangle_{AB} + |11\rangle_{AB})$. If each of Alice and Bob measures their system, then they will both get "0"s or "1"s, which gives a shared random key. Moreover, if we consider the combined system of the three parties (Alice, Bob and an eavesdropper, Eve), we can use a pure-state description (the "Church of the Larger Hilbert Space") and consider a pure state $|\psi\rangle_{ABE}$. In this case, the von Neumann entropy of Eve is $S(\rho_E) = S(\rho_{AB}) = 0$. This means that Eve has absolutely no information on the final key. In the noisy case, Alice and Bob may share $N$ pairs of qubits, which are a noisy version of $N$ maximally entangled states. Now, using the idea of the entanglement distillation protocol discussed in BDSW [41], Alice and Bob may apply local operations and classical communications (LOCCs) to distill from the $N$ noisy pairs a smaller number, say $M$, of almost perfect pairs, i.e., a state close to $|\phi\rangle_{AB}^{\otimes M}$. Once such an EDP has been performed, Alice and Bob can measure their respective systems to generate an $M$-bit final key.
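The noiseless argument can be checked numerically (an illustrative sketch, not from the chapter): measuring the maximally entangled state gives perfectly correlated uniform bits, the joint state is pure (so S(ρAB) = 0 and Eve learns nothing), while each marginal is maximally mixed:

```python
import numpy as np

# |phi>_AB = (|00> + |11>)/sqrt(2), in the basis {|00>, |01>, |10>, |11>}
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)              # pure joint state of Alice and Bob

# Computational-basis statistics: outcomes 00 and 11 each occur with prob 1/2,
# so Alice and Bob obtain identical, uniformly random key bits.
probs = np.diag(rho_ab).real

def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]            # drop zero eigenvalues
    return abs(float((-vals * np.log2(vals)).sum()))

# The joint state is pure: S(rho_AB) = 0, hence S(rho_E) = 0 as well.
assert von_neumann_entropy(rho_ab) < 1e-9

# Each marginal is maximally mixed: S(rho_A) = 1 bit (a uniform key bit).
rho_a = rho_ab.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert abs(von_neumann_entropy(rho_a) - 1.0) < 1e-9
```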


Introduction to Quantum Cryptography http://dx.doi.org/10.5772/56092


How can Alice and Bob be sure that their EDP will be successful? Whether an EDP will be successful or not depends on the initial state shared by Alice and Bob. In practice, Alice and Bob can never be sure what initial state they possess. Therefore, it is useful for them to add a verification step. By, for example, randomly testing a fraction of their pairs, they have a pretty good idea about the properties (e.g., the bit-flip and phase error rates) of their remaining pairs and are pretty confident that their EDP will be successful.
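The verification step is, at heart, classical random sampling. A minimal sketch, assuming a hypothetical channel in which each pair is faulty independently with probability `p_true` (an assumed parameter, for illustration only), shows that the error rate observed on a random test sample tracks the rate of the untested remainder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical channel: each of N shared pairs carries an error with prob p_true.
N, p_true = 10000, 0.03
errors = rng.random(N) < p_true  # True where a pair is faulty

# Verification step: sacrifice a random quarter of the pairs for testing.
test_idx = rng.choice(N, size=N // 4, replace=False)
test_mask = np.zeros(N, dtype=bool)
test_mask[test_idx] = True

estimated_rate = errors[test_mask].mean()   # error rate observed on the sample
remaining_rate = errors[~test_mask].mean()  # actual rate on the untested pairs

# The sampled estimate closely tracks the rate of the remaining pairs.
print(estimated_rate, remaining_rate)
```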

#### *4.3.2. Communication complexity/quantum memory*

The communication complexity/quantum memory approach to security proof was proposed by Ben-Or [45] and subsequently by Renner and Koenig [46]. See also [47]. They provide a formula for secure key generation rate in terms of an eavesdropper's quantum knowledge on the raw key: Let *Z* be a random variable with range ℤ, let *ρ* be a random state, and let *F* be a two-universal function on ℤ with range *S* ={0, 1}*<sup>s</sup>* which is independent of *Z* and *ρ*. Then [46]

$$d(F(Z) \mid \{F\} \otimes \rho) \le \frac{1}{2}\, 2^{-\frac{1}{2}\left(S_2([\{Z\} \otimes \rho]) - S_0([\rho]) - s\right)}.\tag{13}$$

Incidentally, the quantum de Finetti theorem [48] is often useful for simplifying security proofs of this type.
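The two-universal hashing that bound (13) governs can be sketched concretely. The example below uses the standard random-binary-matrix family over GF(2), with illustrative key lengths; it is not a construction taken from [46], just one well-known two-universal family:

```python
import numpy as np

rng = np.random.default_rng(42)

n, s = 64, 16  # raw-key and final-key lengths (illustrative sizes)

# A uniformly random s x n binary matrix over GF(2) is a two-universal family:
# for z != z', Pr_M[M z = M z'] = 2^{-s}.
M = rng.integers(0, 2, size=(s, n), dtype=np.uint8)

def privacy_amplify(M, z):
    """F(z) = M z mod 2: compress the n-bit raw key z to an s-bit final key."""
    return (M @ z) % 2

z = rng.integers(0, 2, size=n, dtype=np.uint8)  # raw key, partially known to Eve
final_key = privacy_amplify(M, z)
print(final_key.shape)  # (16,)
```

Hashing with a freshly chosen member of such a family is exactly the privacy amplification step whose residual distance from uniform Eq. (13) bounds.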

#### *4.3.3. Twisted state approach*

What is a necessary and sufficient condition for secure key generation? From the entanglement distillation approach, we know that entanglement distillation is a sufficient condition for secure key generation. For some time, it was hoped that entanglement distillation is also a necessary condition. However, this idea was proven to be wrong in [49, 50], where it was found that a necessary and sufficient condition is the distillation of a private state, rather than a maximally entangled state. A private state is a "twisted" version of a maximally entangled state. The following theorem was proved in [49]: a state is private in the above sense iff it is of the form

$$\gamma_m = U\, \big|\psi^+_{2^m}\big\rangle_{AB} \big\langle \psi^+_{2^m}\big| \otimes \rho_{A'B'}\, U^\dagger,\tag{14}$$

where $|\psi^+_d\rangle = \frac{1}{\sqrt{d}}\sum_{i=1}^{d}|ii\rangle$ and $\rho_{A'B'}$ is an arbitrary state on $A'$, $B'$. $U$ is an arbitrary unitary controlled in the computational basis,

$$U = \sum_{i,j=1}^{2^m} |ij\rangle_{AB} \langle ij| \otimes U_{ij}^{A'B'}.\tag{15}$$

The operation (15) will be called "twisting" (note that only the $U_{ii}^{A'B'}$ matter here, yet it will be useful to consider general twisting later).
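Eq. (15) is simply a controlled (block-diagonal) unitary, so any twisting operation is itself unitary. The NumPy sketch below (with illustrative dimensions: one key qubit pair, so $2^m = 2$, and a two-dimensional shield) builds such a $U$ from random blocks and checks this:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d, rng):
    """Haar-ish random unitary via QR decomposition of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

m, d_shield = 1, 2        # one key qubit pair (2^m = 2), a 2-dim shield A'B'
dim_ab = (2 ** m) ** 2    # dimension of AB = 4, indexed by the pairs (i, j)

# Twisting unitary: block-diagonal in the computational basis of AB,
# U = sum_{ij} |ij><ij| (x) U_ij on the shield (cf. Eq. (15)).
blocks = [random_unitary(d_shield, rng) for _ in range(dim_ab)]
U = np.zeros((dim_ab * d_shield, dim_ab * d_shield), dtype=complex)
for k, Uk in enumerate(blocks):
    U[k * d_shield:(k + 1) * d_shield, k * d_shield:(k + 1) * d_shield] = Uk

# Any such controlled unitary is itself unitary.
print(np.allclose(U.conj().T @ U, np.eye(dim_ab * d_shield)))  # True
```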

The main new ingredient of the above theorem is the introduction of a "shield" part to Alice and Bob's system. That is, in addition to the systems $A$ and $B$ used by Alice and Bob for key generation, we assume that Alice and Bob also hold some ancillary systems, $A'$ and $B'$, often called the shield part. Since we assume that Eve has no access to the shield part, Eve is further limited in her ability to eavesdrop. Therefore, Alice and Bob can derive a higher key generation rate than in the case where Eve does have access to the shield part.

#### *4.3.4. Complementarity principle*


132 Theory and Practice of Cryptography and Network Security Protocols and Technologies


Another approach to security proofs is to use the complementarity principle of quantum mechanics. Such an approach is interesting because it shows the deep connection between the foundations of quantum mechanics and the security of QKD. In fact, both Mayers' proof [16] and the proof of Biham, Boyer, Boykin, Mor, and Roychowdhury [44] make use of this principle. A clear and rigorous discussion of the complementarity approach to security proofs has recently been achieved by Koashi [51]. The key insight of Koashi's proof is that Alice and Bob's ability to generate a random secure key in the $Z$-basis (by a measurement of the Pauli spin matrix $\sigma_Z$) is equivalent to Bob's ability to help Alice prepare an eigenstate in the complementary, i.e., $X$-basis ($\sigma_X$), with the help of the shield. The intuition is that an $X$-basis eigenstate, for example $|+\rangle_A = \frac{1}{\sqrt{2}}(|0\rangle_A + |1\rangle_A)$, when measured along the $Z$-basis, gives a random answer.
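This last intuition is directly checkable: measuring the $X$ eigenstate $|+\rangle$ in the $Z$ basis gives a fair coin. A short NumPy sampling sketch (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# X-basis eigenstate |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Measuring |+> in the Z (computational) basis: Born-rule probabilities.
probs = np.abs(plus) ** 2             # [0.5, 0.5]
outcomes = rng.choice(2, size=5000, p=probs)

print(outcomes.mean())  # close to 0.5: the Z outcome is a fair coin
```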

#### *4.3.5. Other ideas for security proofs*

Here are two other ideas for security proofs, namely, a) device-independent security proofs and b) security from the causality constraint. Unfortunately, these ideas are still very much under development and so far a complete version of a proof of unconditional security of QKD based on these ideas with a finite key rate is still missing.

Let us start with a) device-independent security proofs. So far we have assumed that Alice and Bob know exactly what their devices are doing. In practice, Alice and Bob may not know their devices for sure. Recently, there has been much interest in the idea of device-independent security proofs. In other words, how does one prove security when Alice and Bob's devices cannot be trusted? See, for example, [52]. The idea is to look only at the input and output variables. A handwaving argument goes as follows. Using their probability distribution, if one can demonstrate the violation of some Bell inequalities, then one cannot explain the data by a separable system. How to develop such a handwaving argument into a full proof of unconditional security is an important question.
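The Bell-test step of this argument can at least be made quantitative. The sketch below is a textbook CHSH computation on a maximally entangled pair, with the standard optimal measurement angles (it is an illustration of the idea, not code from [52]); it evaluates the CHSH value $S = 2\sqrt{2} > 2$, which no separable system can reach:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def meas(theta):
    """Spin measurement at angle theta in the x-z plane: cos(t) Z + sin(t) X."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(ta, tb):
    """Correlation E(a, b) = <phi+| A(ta) (x) B(tb) |phi+> = cos(ta - tb)."""
    op = np.kron(meas(ta), meas(tb))
    return (phi_plus.conj() @ op @ phi_plus).real

# Standard optimal CHSH angles.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)
print(S)  # ~2.828 = 2*sqrt(2), violating the classical CHSH bound of 2
```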


The second idea, b) security from the causality constraint, is even more ambitious. The question that it tries to address is the following: how can one prove security even if quantum mechanics is wrong? In [53] and references cited therein, it was suggested that perhaps a more general physical principle, such as the no-signaling requirement for space-like separated observables, could be used to prove the security of QKD.

#### **5. Quantum secret sharing**

"Secret sharing" refers to an important family of multi-party cryptographic protocols in both the classical and the quantum contexts. A secret sharing protocol comprises a dealer and *n* players who are interconnected by some set of classical or quantum channels. The "secret" to be shared is a classical string or quantum state and is distributed among the players by the dealer in such a way that it can only be recovered by certain subsets of players acting collaboratively. The access structure is the set of all subsets of players who can recover the secret, and the adversary structure corresponds to those subsets that obtain no knowledge of the secret. There may, in addition, be external eavesdroppers who should also gain no knowledge of the secret.

Quantum secret sharing (abbreviated QSS) is the generalization of quantum key distribution to more than two parties [54]. In this new application of quantum communication, Alice distributes a secret key to two other users, Bob and Charlie, in such a way that neither Bob nor Charlie alone has any information about the key, but together they have full information. As in traditional QC, an eavesdropper trying to get some information about the key creates errors in the transmission data and thus reveals her presence. The motivation behind quantum secret sharing is to guarantee that Bob and Charlie cooperate, since one of them might be dishonest, in order to obtain a given piece of information. In contrast with previous proposals using three-particle Greenberger-Horne-Zeilinger states [55], pairs of entangled photons in so-called energy-time Bell states were used to mimic the necessary quantum correlation of three entangled qubits, although only two photons exist at the same time. This is possible because of the symmetry between the preparation device acting on the pump pulse and the devices analyzing the downconverted photons. Therefore the emission of a pump pulse can be considered as the detection of a photon with 100% efficiency, and the scheme features a much higher coincidence rate than that expected with the initially proposed "triple-photon" schemes.

QSS, which is based on the laws of quantum mechanics instead of mathematical assumptions, can share information with unconditional security. According to the form of the shared information, QSS can be divided into QSS of classical messages and QSS of quantum information. QSS of classical messages can be further divided into schemes based on entanglement and schemes without entanglement.

In 1999, Hillery et al. [55] used entangled three-photon GHZ states to propose the first QSS protocol, namely the HBB99 scheme. In their scheme, the dealer (Alice) prepares a three-photon quantum system in the GHZ state $|\psi\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)_{ABC}$ and sends photons B and C to Bob and Charlie, respectively. The three parties each randomly choose one of two measuring bases and measure the photons in their hands independently. They keep the correlated results for generating the key $K_A$. In the same year, Cleve et al. utilized the properties of quantum error-correcting codes to propose the first $(k, n)$ threshold QSS protocol. In a $(k, n)$ threshold scheme, any subset of $k$ or more parties can reconstruct the secret, while any subset of $k-1$ or fewer parties can obtain no information [56]. In 2001, Tittel et al. realized quantum secret sharing experimentally for the first time [54]. In 2002, Tyc et al. developed the theory of continuous-variable quantum secret sharing and proposed its interferometric realization using passive and active optical elements [57]. In 2003, Gou et al. presented a quantum secret sharing scheme in which only product states are employed [58]. In 2004, Xiao et al. [59] showed that in the Hillery-Bužek-Berthiaume QSS scheme the secret information is shared in the parity of binary strings formed by the participants' measured outcomes. With the rapid development of QSS, researchers are working to achieve unconditional security.
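The $(k, n)$ threshold structure has a well-known classical counterpart, Shamir's secret sharing; the quantum construction of Cleve et al. [56] generalizes the same idea using quantum error-correcting codes. A minimal classical sketch over a small prime field (parameters are illustrative), showing that any $k$ shares recover the secret:

```python
import random

random.seed(5)
P = 257  # small prime; all arithmetic is over GF(P)

def share(secret, k, n):
    """Shamir (k, n) sharing: embed the secret as f(0) of a random
    degree-(k-1) polynomial and hand out the points (x, f(x))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(secret=123, k=3, n=5)
print(reconstruct(shares[:3]))  # 123: any 3 of the 5 shares recover the secret
```

Any $k-1$ points of a random degree-$(k-1)$ polynomial are consistent with every possible secret, which is the threshold property the quantum schemes reproduce for quantum states.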

#### **5.1. QSS based on entanglement states**


Quantum entanglement is an indispensable physical resource in QSS. Many applications of QSS rely on this entanglement feature, so the study of entanglement is a core issue of quantum information theory.

Let us look at QSS based on entanglement. The entangled states are all generated by the sender, and the order of two or more photons sent to the same agent is randomly changed. After the photons are sent to the receivers, in the detection mode the order of the two photons is announced, so that the parties can check the security of the quantum channel; in the information mode, the two receivers each perform a Bell measurement on the two photons they own, and then communicate through a classical channel to share the secret key with the sender. This protocol ensures the validity and security of the shared information.

We can see an example of QSS based on entanglement state GHZ [55].

Let us suppose that Alice, Bob, and Charlie each have one particle from a GHZ triplet that is in the state $|\psi\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$. They each choose at random whether to measure their particle in the $x$ or $y$ direction. They then announce publicly in which direction they have made a measurement, but not the results of their measurements. Half the time, Bob and Charlie, by combining the results of their measurements, can determine what the result of Alice's measurement was. This allows Alice to establish a joint key with Bob and Charlie, which she can then use to send her message. Let us see how this works in more detail. Define the $x$ and $y$ eigenstates

$$\begin{aligned} |+x\rangle &= \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle), & |+y\rangle &= \tfrac{1}{\sqrt{2}}(|0\rangle + i|1\rangle), \\ |-x\rangle &= \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle), & |-y\rangle &= \tfrac{1}{\sqrt{2}}(|0\rangle - i|1\rangle). \end{aligned}\tag{16}$$


We can see the effects of measurements by Alice and Bob on the state of Charlie's particle if we express the GHZ state in different ways. Noting that

$$|0\rangle = \tfrac{1}{\sqrt{2}}(|+x\rangle + |-x\rangle), \qquad |1\rangle = \tfrac{1}{\sqrt{2}}(|+x\rangle - |-x\rangle),\tag{17}$$

we can write

$$\begin{aligned} |\psi\rangle = \tfrac{1}{2\sqrt{2}}\big[ &(|+x\rangle_a|+x\rangle_b + |-x\rangle_a|-x\rangle_b)(|0\rangle_c + |1\rangle_c) \\ {}+{} &(|+x\rangle_a|-x\rangle_b + |-x\rangle_a|+x\rangle_b)(|0\rangle_c - |1\rangle_c)\big]. \end{aligned}\tag{18}$$

This decomposition of $|\psi\rangle$ tells us what happens if both Alice and Bob make measurements in the $x$ direction. If they both get the same result, then Charlie will have the state $\frac{1}{\sqrt{2}}(|0\rangle_c + |1\rangle_c)$; if they get different results, he will have the state $\frac{1}{\sqrt{2}}(|0\rangle_c - |1\rangle_c)$. He can determine which of these states he has by performing a measurement along the $x$ direction. The following table summarizes the effects of Alice's and Bob's measurements on Charlie's state:


$$\begin{array}{c|cccc} \textbf{Bob}\backslash\textbf{Alice} & +x & -x & +y & -y \\ \hline +x & |0\rangle+|1\rangle & |0\rangle-|1\rangle & |0\rangle-i|1\rangle & |0\rangle+i|1\rangle \\ +y & |0\rangle-i|1\rangle & |0\rangle+i|1\rangle & |0\rangle-|1\rangle & |0\rangle+|1\rangle \end{array}$$

**Table 2.** QSS based on entanglement state [55].

Alice's measurements are given in the columns and Bob's are given in the rows. Charlie's state, up to normalization, appears in the boxes. From the table it is clear that if Charlie knows what measurements Alice and Bob made (that is, *x* or *y*), he can determine whether their results are the same or opposite and also that he will gain no knowledge of what their results actually are. Similarly, Bob will not be able to determine what Alice's result is without Charlie's assistance because he does not know if his result is the same as Alice's or the opposite of hers.
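The correlations behind Table 2 can be verified directly: for the GHZ state, the product of the three $\pm 1$ outcomes is fixed whenever the bases are $xxx$ or contain exactly two $y$'s, and is uncorrelated otherwise. A short NumPy check (an illustration of [55], not code from it):

```python
import numpy as np

# Pauli matrices and the three-qubit GHZ state (|000> + |111>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def expval(op, psi):
    """Expectation value <psi| op |psi> (real for Hermitian op)."""
    return (psi.conj() @ op @ psi).real

# If the three bases are xxx, or one x and two y, the product of the three
# +/-1 outcomes is fixed, so Bob and Charlie together can infer Alice's result.
print(expval(kron3(X, X, X), ghz))  # ~ +1: outcomes multiply to +1
print(expval(kron3(X, Y, Y), ghz))  # ~ -1
print(expval(kron3(Y, X, Y), ghz))  # ~ -1
print(expval(kron3(X, X, Y), ghz))  # ~  0: no correlation; round is discarded
```

The basis combinations with a fixed product are exactly half of the eight possibilities, matching the statement above that the protocol succeeds half the time.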

To improve the efficiency of QSS, a protocol that shares the message directly among the users was proposed. The scheme made full use of entanglement swapping of Bell states and local operations. For the detection of eavesdropping, the EPR pairs were divided into two parts: the checking parts and the encoding parts. After ensuring the security of the quantum channel by measuring the checking particles in conjugate bases, the sender encoded her bits via local unitary operations on the encoding parts. The protocol is secure, and two Bell states can be used to share a two-bit message. There is also a scheme for multiparty quantum secret sharing based on EPR entangled states. In that scheme, the secret messages are imposed on auxiliary particles, and the transmitted particles of the EPR pairs do not carry any secret messages during the whole process of transmission. After both communicators reliably share the EPR entangled states, all the participants can securely share the secret messages of the sender. Because no particles carrying the secret message are transmitted on the quantum channel during the process of transmission, the scheme can efficiently resist the eavesdropper's attack on the secret message.

So, entanglement plays an important role in quantum secret sharing, and many application fields of quantum information theory, such as quantum teleportation, QKD, and quantum computing, need to use this entanglement feature. However, the quantification of entanglement has a satisfactory solution only for bipartite quantum systems, and the quantification of multipartite entanglement is still open even for a pure multipartite state. Until now, a variety of different entanglement measures have been proposed for the multipartite setting, such as the robustness of entanglement, the relative entropy of entanglement, and the geometric measure.

However, all these measures involve optimization problems of considerable complexity, which makes the quantification of multipartite entanglement very difficult. Fortunately, it is hopeful to obtain the exact value of the multipartite entanglement of graph states, which are very useful multipartite quantum states in quantum information processing. Graph states are the specific algorithmic resource for the one-way quantum computing model, and they are a subset of the stabilizer states that are widely used in quantum error correction.

For the entanglement-based scheme of [55], assume Alice, Bob and Charlie share the GHZ state |*ψ*⟩ = (1/√2)(|000⟩ + |111⟩). We express the GHZ state in different ways. Noting that

$$|{+}x\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \quad |{-}x\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle), \quad |{+}y\rangle = \frac{1}{\sqrt{2}}(|0\rangle + i|1\rangle), \quad |{-}y\rangle = \frac{1}{\sqrt{2}}(|0\rangle - i|1\rangle),$$

so that

$$|0\rangle = \frac{1}{\sqrt{2}}\big(|{+}x\rangle + |{-}x\rangle\big), \qquad |1\rangle = \frac{1}{\sqrt{2}}\big(|{+}x\rangle - |{-}x\rangle\big), \tag{17}$$

we can write

$$|\psi\rangle = \frac{1}{2\sqrt{2}}\Big[\big(|{+}x\rangle_a|{+}x\rangle_b + |{-}x\rangle_a|{-}x\rangle_b\big)\big(|0\rangle_c + |1\rangle_c\big) + \big(|{+}x\rangle_a|{-}x\rangle_b + |{-}x\rangle_a|{+}x\rangle_b\big)\big(|0\rangle_c - |1\rangle_c\big)\Big]. \tag{18}$$

This decomposition of |*ψ*⟩ tells us what happens if both Alice and Bob make measurements in the *x* direction. If they both get the same result, then Charlie will have the state (1/√2)(|0⟩*<sub>c</sub>* + |1⟩*<sub>c</sub>*); if they get different results, he will have the state (1/√2)(|0⟩*<sub>c</sub>* − |1⟩*<sub>c</sub>*). He can determine which of these states he has by performing a measurement along the *x* direction. The following table summarizes the effects of Alice's and Bob's measurements on Charlie's state:

| **Bob** \ **Alice** | *+x* | *−x* | *+y* | *−y* |
|---|---|---|---|---|
| *+x* | \|0⟩ + \|1⟩ | \|0⟩ − \|1⟩ | \|0⟩ − *i*\|1⟩ | \|0⟩ + *i*\|1⟩ |
| *+y* | \|0⟩ − *i*\|1⟩ | \|0⟩ + *i*\|1⟩ | \|0⟩ − \|1⟩ | \|0⟩ + \|1⟩ |

**Table 2.** QSS based on entanglement state [55].

#### **5.2. QSS with qudit graph states**


The entanglement quantification of graph states is relatively simple, for they can be described in graph language. So far, the study of graph-state entanglement has just started; the latest research result is the determination of upper and lower bounds on graph-state entanglement by using local operations and classical communication, which can confirm the exact entanglement only for graph states whose two bounds are equal. For graph states with unequal bounds, this approach can only give a range of entanglement, not the exact value.

In quantum computing, a graph state is a special type of multi-qubit state that can be represented by a graph. Each qubit is represented by a vertex of the graph, and there is an edge between every interacting pair of qubits. In particular, graph states are a convenient way of representing certain types of entangled states.
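To make this concrete, the following sketch (my own illustration) builds the graph state of a three-vertex path graph by applying controlled-Z gates to |+⟩⊗|+⟩⊗|+⟩ and verifies that every operator consisting of X on a vertex and Z on its neighbours leaves the state unchanged, which is the defining stabilizer property given below:

```python
import numpy as np

I2x2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto |1>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def op_on(n, ops):
    """n-qubit operator with the 2x2 matrices in `ops` on the given qubits."""
    out = np.array([[1]], dtype=complex)
    for q in range(n):
        out = np.kron(out, ops.get(q, I2x2))
    return out

# A three-vertex path graph: vertices 0-1-2, edges (0,1) and (1,2).
n, edges = 3, [(0, 1), (1, 2)]

# Start from |+>|+>|+> and apply a controlled-Z for every edge.
G = plus
for _ in range(n - 1):
    G = np.kron(G, plus)
for a, b in edges:
    CZ = np.eye(2 ** n) - 2 * op_on(n, {a: P1, b: P1})  # phase -1 on |1>_a|1>_b
    G = CZ @ G

def neighbours(a):
    return [v for e in edges if a in e for v in e if v != a]

# Each stabilizer (X on a vertex, Z on its neighbours) must fix |G>.
checks = []
for a in range(n):
    K = op_on(n, {a: X, **{b: Z for b in neighbours(a)}})
    checks.append(np.allclose(K @ G, G))
print(checks)  # [True, True, True]
```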

Given a graph *G* =(*V*, *E*) with the set of vertices *V* and the set of edges *E*, the corresponding graph state is

$$|G\rangle = \prod_{(a,b)\in E} U^{\{a,b\}}\, |+\rangle^{\otimes V}, \tag{19}$$

where the operator *U* {*a*,*b*} is the controlled-Z interaction between the two vertices (qubits) *a*, *b*,

$$U^{\{a,b\}} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix},$$

and |+⟩ = (1/√2)(|0⟩ + |1⟩). With each graph *G* =(*V*, *E*), we associate a graph state. A graph state is a certain pure quantum state on a Hilbert space *H<sub>V</sub>* =(ℂ²)^⊗*V*.

An alternative and equivalent definition is the following. Each vertex labels a two-level quantum system or qubit — a notion that can be extended to quantum systems of finite dimension *d*. To every vertex *a*∈*V* of the graph *G* =(*V*, *E*) is attached a Hermitian operator

$$K_G^{(a)} = \sigma_x^{(a)} \prod_{b\in N_a} \sigma_z^{(b)}. \tag{20}$$

In terms of the adjacency matrix Γ, this can be expressed as

$$K_G^{(a)} = \sigma_x^{(a)} \prod_{b\in V} \left(\sigma_z^{(b)}\right)^{\Gamma_{ab}}. \tag{21}$$

As usual, the matrices *σ<sub>x</sub>*^(*a*), *σ<sub>y</sub>*^(*a*), *σ<sub>z</sub>*^(*a*) are the Pauli matrices, where the upper index specifies the Hilbert space on which the operator acts. *K<sub>G</sub>*^(*a*) is an observable of the qubits associated with the vertex *a* and all of its neighbors *b*∈ *N<sub>a</sub>*. The graph state |*G*⟩ is then defined as the simultaneous eigenstate of the *N* = |*V*| operators {*K<sub>G</sub>*^(*a*)}<sub>*a*∈*V*</sub> with eigenvalue 1:

$$K_G^{(a)} |G\rangle = |G\rangle. \tag{22}$$

Introduction to Quantum Cryptography http://dx.doi.org/10.5772/56092

Here they consider three specific varieties of such schemes previously demonstrated in graph states. They note that all existing forms of secret sharing that have been proposed fall into one of these categories [60]:

**1.** CC scheme: The secret is classical, the dealer is connected to the players via private quantum channels and all players are connected by private classical channels.

**2.** CQ scheme: The secret is classical, the dealer shares public quantum channels with each player and the players are connected to each other by private classical channels.

**3.** QQ scheme: The secret is quantum, the dealer shares either private or public quantum channels with each player and the players are connected to each other by private quantum or classical channels.

Now let's see an example of QSS with graph states, using the third scenario, the QQ scheme, which is readily generalisable to qudits. In this scheme, the secret to be shared is a quantum state |*s*⟩ in a *d*-dimensional Hilbert space, initially possessed by the dealer, who distributes it to the other parties via a joint operation on the secret state and the parties' shared graph state, in a manner analogous to quantum teleportation. We describe the general protocol explicitly below.

Denoting the dealer's secret qudit as


$$\left\| s \right\rangle\_{D} = \sum\_{i=0}^{d-1} \alpha\_{i} \left| i \right\rangle\_{D}. \tag{23}$$

the dealer prepares the state |*s*⟩*<sub>D</sub>* |*G*⟩*<sub>D,V</sub>*, corresponding to some graph state *G* for the dealer's qudit *D* and all the players' qudits *V*. The dealer distributes the players' qudits to them. The dealer then measures her two qudits in the generalized Bell basis {|*ψ<sub>mn</sub>*⟩}, where

$$|\psi_{mn}\rangle := \frac{1}{\sqrt{d}} \sum_j \omega^{jn}\, |j\rangle |j+m\rangle, \tag{24}$$

with *ω* = *e*^{2*πi*/*d*} and the addition *j* + *m* taken modulo *d*.

If the dealer's measurement result is (*m*, *n*), corresponding to the state |*ψ<sub>mn</sub>*⟩, then it follows from the rules for projective measurement that the resultant state for all parties is

$$\big(|\psi_{mn}\rangle_D \langle\psi_{mn}|\big)\, |s\rangle_D |G\rangle_{D,V} \;\propto\; |\psi_{mn}\rangle_D \sum_j \alpha_j\, \omega^{-jn} \left| g_{z=(j+m)(A_{D1},\, A_{D2},\, \cdots,\, A_{DN})} \right\rangle_V, \tag{25}$$

where |*g<sub>z</sub>*⟩ is the encoded reduced graph state on the players 1, ⋯, *n* with labels *z*.
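For concreteness, the generalized Bell basis of Eq. (24) can be constructed numerically. The sketch below (an illustration of mine, taking ω = e^{2πi/d}) builds the *d*² states for *d* = 3 and verifies that they form an orthonormal basis of the two-qudit space:

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)

def psi(m, n):
    """|psi_mn> = (1/sqrt(d)) * sum_j omega^{jn} |j>|j+m mod d>."""
    v = np.zeros(d * d, dtype=complex)
    for j in range(d):
        v[j * d + (j + m) % d] = omega ** (j * n)
    return v / np.sqrt(d)

# All d^2 generalized Bell states, and their Gram matrix.
basis = [psi(m, n) for m in range(d) for n in range(d)]
gram = np.array([[np.vdot(u, v) for v in basis] for u in basis])
print(np.allclose(gram, np.eye(d * d)))  # True: an orthonormal basis
```

Orthonormality is what makes the dealer's measurement a complete projective measurement, so exactly one outcome (*m*, *n*) occurs and Eq. (25) follows.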

If the dealer informs the players of her measurement result (*m*, *n*), then a set of players in *V* can apply a correction operator

$$U_{mn} := K_a^{-nN_{Da}^{-1}} Z^{-mA_D} \tag{26}$$

to obtain the state

$$|s_g\rangle_V = \sum_j \alpha_j \left| g_{z=j(A_{D1},\, A_{D2},\, \ldots,\, A_{DN})} \right\rangle_V. \tag{27}$$

The access properties of this final state depend on the graph state used. Qualitatively, for certain initial graph states, the state |*s<sub>g</sub>*⟩*<sub>V</sub>* can be regarded as a superposition of orthogonal labelled graph states whose labels have the same access structure as CC protocols. Thus, the ability to recover the quantum secret corresponds to the ability to recover these classical labels, providing a natural extension of the classical protocols to the quantum case.

#### **6. Post-quantum cryptography**

Post-quantum cryptography deals with cryptosystems that run on conventional computers and are secure against attacks by quantum computers. This field came about because most currently popular public-key cryptosystems rely on the integer factorization problem or discrete logarithm problem, both of which would be easily solvable on large enough quantum computers using Shor's algorithm. Even though current publicly known experimental quantum computing is nowhere near powerful enough to attack real cryptosystems, many cryptographers are researching new algorithms, in case quantum computing becomes a threat in the future.

In contrast, most current symmetric cryptography (symmetric ciphers and hash functions) is secure from quantum computers. The quantum Grover's algorithm can speed up attacks against symmetric ciphers, but this can be counteracted by increasing the key size. Thus post-quantum cryptography does not focus on symmetric algorithms. Post-quantum cryptography is also unrelated to quantum cryptography, which refers to using quantum phenomena to achieve secrecy. Currently post-quantum cryptography is mostly focused on four different approaches:

**1.** Lattice-based cryptography such as NTRU and GGH;

**2.** Multivariate cryptography such as unbalanced oil and vinegar;

**3.** Hash-based signatures such as Lamport signatures and the Merkle signature scheme;

**4.** Code-based cryptography that relies on error-correcting codes, such as McEliece encryption and Niederreiter signatures.
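To give a flavor of the hash-based approach, here is a minimal sketch of a Lamport one-time signature over SHA-256 (my own toy illustration, not a production scheme; each key pair must sign at most one message):

```python
import hashlib
import secrets

def keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def bits(msg):
    """The 256 bits of SHA-256(msg), most significant bit first."""
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    """Reveal one secret of each pair, chosen by the message-hash bits."""
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    """Check every revealed secret against the published hash."""
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify(pk, b"post-quantum hello", sig))  # True
print(verify(pk, b"tampered message", sig))    # False
```

The security rests only on the preimage resistance of the hash function, which Grover's algorithm weakens merely quadratically; this is why hash-based signatures are a post-quantum candidate.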

We can use the following figure to show the content of post-quantum cryptography clearly [7].

**Figure 4.** Post-quantum cryptography. Sizes and times are simplified to *b*^{1+ο(1)}, *b*^{2+ο(1)}, etc. Optimization of any specific *b* requires a more detailed analysis.

Post-quantum cryptography is, in general, a quite different topic from quantum cryptography:

**•** Post-quantum cryptography, like the rest of cryptography, covers a wide range of secure-communication tasks, ranging from secret-key operations, public-key signatures, and public-key encryption to high-level operations such as secure electronic voting. Quantum cryptography handles only one task, namely expanding a short shared secret into a long shared secret.

**•** Post-quantum cryptography, like the rest of cryptography, includes some systems proven to be secure, but also includes many lower-cost systems that are conjectured to be secure. Quantum cryptography rejects conjectural systems — begging the question of how Alice and Bob can securely share a secret in the first place.

**•** Post-quantum cryptography includes many systems that can be used for a noticeable fraction of today's Internet communication — Alice and Bob need to perform some computation and send some data but do not need any new hardware. Quantum cryptography requires new network hardware that is, at least for the moment, impossibly expensive for the vast majority of Internet users.

#### **Acknowledgements**

This work was conducted when Xiaoqing Tan visited the University of Toronto and is supported by the NSFC 61003258. She especially thanks Hoi-Kwong Lo for the hospitality during her stay at the University of Toronto.

#### **Author details**

Xiaoqing Tan\*

Address all correspondence to: ttanxq@jnu.edu.cn

Dept. of Mathematics, Jinan University, Guangzhou, Guangdong, China

#### **References**

[1] Wiesner, S., "Conjugate coding," *Sigact News,* (1983), 15(1), 78-88.

[2] Bennett, C. H., Bessette, F., & Brassard, G. *et al.*, "Experimental quantum cryptography," in Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques on Advances in Cryptology, Aarhus, Denmark, (1991), 253-265.

[3] Bennett, C. H., Brassard, G., & Crepeau, C. *et al.*, "Practical Quantum Oblivious Transfer," in Proceedings of the 11th Annual International Cryptology Conference on Advances in Cryptology, (1992), 351-366.

[4] Brassard, G., Crepeau, C., & Jozsa, R. *et al.*, "A quantum bit commitment scheme provably unbreakable by both parties," in Proceedings of the 1993 IEEE 34th Annual Foundations of Computer Science, (1993), 362-371.

[5] Lo, H.-K., & Zhao, Y., "Quantum Cryptography," http://arxiv.org/abs/0803.2507/.

[6] Shor, P. W., "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer," http://arxiv.org/abs/quant-ph/9508027.

[7] Bernstein, D. J., "Introduction to post-quantum cryptography," *Post-quantum cryptography,* (2009).

[8] Townsend, P. D., Rarity, J. G., & Tapster, P. R., "Single photon interference in a 10 km long optical fibre interferometer," *Electronics Letters,* (1993), 29(7), 634-635.

[9] Townsend, P. D., Rarity, J. G., & Tapster, P. R., "Enhanced single photon fringe visibility in a 10 km-long prototype quantum cryptography channel," *Electronics Letters,* (1993), 29(14), 1291-1293.

[10] Einstein, A., Podolsky, B., & Rosen, N., "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," *Physical Review,* (1935), 47(10), 777-780.

[11] Kumar, M., *Quantum*, London: Icon Books, (2009).

[12] Horodecki, R., Horodecki, P., & Horodecki, M. *et al.*, "Quantum entanglement," *Reviews of Modern Physics,* (2009), 81(2), 865-942.

[13] Jaeger, G., Shimony, A., & Vaidman, L., "Two interferometric complementarities," *Physical Review A,* (1995), 51(1), 54-67.

[14] Vernam, G. S., "Cipher Printing Telegraph Systems For Secret Wire and Radio Telegraphic Communications," *Transactions of the American Institute of Electrical Engineers,* vol. XLV, (1926), 295-301.

[15] Bennett, C. H., & Brassard, G., "Quantum cryptography: Public key distribution and coin tossing," in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing, India, (1984), 175.

[16] Mayers, D., "Unconditional security in quantum cryptography," *J. ACM,* (2001), 48(3), 351-406.

[17] Lo, H.-K., & Chau, H. F., "Unconditional Security of Quantum Key Distribution over Arbitrarily Long Distances," *Science,* March 26, (1999), 283(5410), 2050-2056.

[18] Stebila, D., Mosca, M., & Lütkenhaus, N., "The Case for Quantum Key Distribution," *Quantum Communication and Quantum Networking*, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, A. Sergienko, S. Pascazio and P. Villoresi, eds., Springer Berlin Heidelberg, (2010), 283-296.

[19] Ekert, A. K., "Quantum cryptography based on Bell's theorem," *Physical Review Letters,* (1991), 67(6), 661-663.

[20] Bennett, C. H., Brassard, G., & Mermin, N. D., "Quantum cryptography without Bell's theorem," *Physical Review Letters,* (1992), 68(5), 557-559.

[21] Grosshans, F., Van Assche, G., & Wenger, J. *et al.*, "Quantum key distribution using gaussian-modulated coherent states," *Nature,* Jan 16, (2003), 421(6920), 238-241.

[22] Bing Qi, Li Qian, and H.-K. Lo, "A brief introduction of quantum cryptography for engineers," http://arxiv.org/abs/1002.1237.

[23] Renner, R., & Cirac, J. I., "de Finetti Representation Theorem for Infinite-Dimensional Quantum Systems and Applications to Quantum Cryptography," *Physical Review Letters,* Mar 20, (2009), 102(11).

[24] Lodewyck, J., Bloch, M., & Garcia-Patron, R. *et al.*, "Quantum key distribution over 25 km with an all-fiber continuous-variable system," *Physical Review A,* Oct, (2007), 76(4).

[25] Qi, B., Huang, L.-L., & Qian, L. *et al.*, "Experimental study on the Gaussian-modulated coherent-state quantum key distribution over standard telecommunication fibers," *Physical Review A,* (2007), 76(5), 052323.

[26] Ekert, A., "Complex and unpredictable Cardano," *International Journal of Theoretical Physics,* Aug, (2008), 47(8), 2101-2119.

[27] Van Dam, W., D'Ariano, G. M., & Ekert, A. *et al.*, "Optimal phase estimation in quantum networks," *Journal of Physics A: Mathematical and Theoretical,* Jul 13, (2007), 40(28), 7971-7984.

[28] Christandl, M., Datta, N., & Ekert, A. *et al.*, "Perfect state transfer in quantum spin networks," *Physical Review Letters,* May 7, (2004), 92(18).

[29] Lo, H. K., Ma, X. F., & Chen, K., "Decoy state quantum key distribution," *Physical Review Letters,* Jun 17, (2005), 94(23).

[30] Curty, M., Gühne, O., & Lewenstein, M. *et al.*, "Detecting two-party quantum correlations in quantum-key-distribution protocols," *Physical Review A,* (2005), 71(2), 022306.

[31] Lütkenhaus, N., "Security against eavesdropping in quantum cryptography," *Physical Review A,* (1996), 54(1), 97-111.

[32] Biham, E., & Mor, T., "Security of Quantum Cryptography against Collective Attacks," *Physical Review Letters,* (1997), 78(11), 2256-2259.

[33] Biham, E., & Mor, T., "Bounds on Information and the Security of Quantum Cryptography," *Physical Review Letters,* (1997), 79(20), 4034-4037.

[34] Gisin, N., Ribordy, G., & Tittel, W. *et al.*, "Quantum cryptography," *Reviews of Modern Physics,* (2002), 74(1), 145-195.

[35] Barrett, J., Hardy, L., & Kent, A., "No signaling and quantum key distribution," *Physical Review Letters,* Jul 1, (2005), 95(1).

[36] Brassard, G., "Brief history of quantum cryptography: a personal perspective," 19-23.

[37] Zhao, Y., Fung, C.-H. F., & Qi, B. *et al.*, "Quantum hacking: Experimental demonstration of time-shift attack against practical quantum-key-distribution systems," *Physical Review A,* (2008), 78(4), 042333.

[38] Hwang, W.-Y., "Quantum Key Distribution with High Loss: Toward Global Secure Communication," *Physical Review Letters,* (2003), 91(5), 057901.

[39] Mayers, D., "Unconditionally Secure Quantum Bit Commitment is Impossible," *Physical Review Letters,* (1997), 78(17), 3414-3417.

[40] Pironio, S., Acín, A., & Brunner, N. *et al.*, "Device-independent quantum key distribution secure against collective attacks," *New Journal of Physics,* (2009), 11(4), 045021.

[41] Bennett, C. H., DiVincenzo, D. P., & Smolin, J. A. *et al.*, "Mixed-state entanglement and quantum error correction," *Physical Review A,* (1996), 54(5), 3824-3851.

[42] Deutsch, D., Ekert, A., & Jozsa, R. *et al.*, "Quantum Privacy Amplification and the Security of Quantum Cryptography over Noisy Channels," *Physical Review Letters,* (1996), 77(13), 2818-2821.

[43] Shor, P. W., & Preskill, J., "Simple Proof of Security of the BB84 Quantum Key Distribution Protocol," *Physical Review Letters,* (2000), 85(2), 441-444.

[44] Biham, E., Boyer, M., & Boykin, P. O. *et al.*, "A proof of the security of quantum key distribution (extended abstract)," in Proceedings of the thirty-second annual ACM symposium on Theory of computing, Portland, Oregon, United States, (2000), 715-724.

[45] Ben-Or, M., (2002). http://www.msri.org/publications/ln/msri/2002/qip/ben-or/1/index.html.

[46] Renner, R., & Koenig, R., "Universally composable privacy amplification against quantum adversaries."

[47] Renner, R., "Security of Quantum Key Distribution," http://arxiv.org/abs/quant-ph/0512258.

[48] Renner, R., "Symmetry of large physical systems implies independence of subsystems," *Nat Phys,* (2007), 3(9), 645-649.

[49] Horodecki, K., Horodecki, M., & Horodecki, P. *et al.*, "Secure Key from Bound Entanglement," *Physical Review Letters,* (2005), 94(16), 160502.

[50] Karol Horodecki, Michal Horodecki, Pawel Horodecki *et al.*, "Quantum key distribution based on private states: unconditional security over untrusted channels with zero quantum capacity," http://arxiv.org/abs/quant-ph/0608195.

[51] Koashi, M., "Complementarity, distillable secret key, and distillable entanglement," (2007).

[52] Acín, A., Brunner, N., & Gisin, N. *et al.*, "Device-Independent Security of Quantum Cryptography against Collective Attacks," *Physical Review Letters,* (2007), 98(23), 230501.

[53] Ll. Masanes, R. Renner, M. Christandl *et al.*, "Unconditional security of key distribution from causality constraints," (2006).

[54] Tittel, W., Zbinden, H., & Gisin, N., "Experimental demonstration of quantum secret sharing," *Physical Review A,* (2001), 63(4), 042301.

[55] Hillery, M., Bužek, V., & Berthiaume, A., "Quantum secret sharing," *Physical Review A,* (1999), 59(3), 1829-1834.

[56] Cleve, R., Gottesman, D., & Lo, H.-K., "How to Share a Quantum Secret," *Physical Review Letters,* (1999), 83(3), 648-651.

[57] Tyc, T., & Sanders, B. C., "How to share a continuous-variable quantum secret by optical interferometry," *Physical Review A,* Apr, (2002), 65(4).

[58] Guo, G.-P., & Guo, G.-C., "Quantum secret sharing without entanglement," *Physics Letters A,* (2003), 310(4), 247-251.

[59] Xiao, L., Long, G. L., & Deng, F. G. *et al.*, "Efficient multiparty quantum-secret-sharing schemes," *Physical Review A,* (2004), 69(5), 052307.

[60] Keet, A., Fortescue, B., & Markham, D. *et al.*, "Quantum secret sharing with qudit graph states," *Physical Review A,* (2010), 82(6), 062315.

### *Edited by Jaydip Sen*

In an age of explosive worldwide growth of electronic data storage and communications, effective protection of information has become a critical requirement. When used in coordination with other tools for ensuring information security, cryptography in all of its applications, including data confidentiality, data integrity, and user authentication, is a most powerful tool for protecting information. This book presents a collection of research work in the field of cryptography. It discusses some of the critical challenges that are being faced by the current computing world and also describes some mechanisms to defend against these challenges. It is a valuable source of knowledge for researchers, engineers, graduate and doctoral students working in the field of cryptography. It will also be useful for faculty members of graduate schools and universities.

Photo by cybrain / iStock

Theory and Practice of Cryptography and Network Security Protocols and Technologies
